url — string, 14 to 2.42k characters
text — string, 100 to 1.02M characters
date — string, 19 characters
metadata — string, 1.06k to 1.1k characters
https://www3.unisa.ac.za/yeats-books-hphfk/archive.php?page=0a1543-matlab-annotation-latex
By default, MATLAB interprets text using TeX markup, which covers Greek letters, superscripts, subscripts, font type and color, and other special characters. For more formatting options, such as integral and summation symbols or fractions, set the Interpreter property to 'latex' and use dollar symbols around the text: '$\int_1^{20} x^2 dx$' for inline mode or '$$\int_1^{20} x^2 dx$$' for display mode. The displayed text uses the default LaTeX font style; the FontName, FontWeight, and FontAngle properties have no effect, so change the font style with LaTeX markup instead, and note that modifiers remain in effect until the end of the text. The maximum size of text that the LaTeX interpreter accepts is 1200 characters, reduced by about 10 characters per line for multiline text. Text rendered by the LaTeX interpreter also comes out smaller than normal text, so you may think the interpreter is not working when you only need to increase the font size by a larger value (try very different numbers, like 1 and 100, to see whether this is the case). For example, title('$$\frac{1}{2}$$','interpreter','latex','fontsize',20) writes a neatly typeset 1/2 as the figure title. With the LaTeX interpreter a whole range of arrow symbols (up, down, left, right, and other angles) is also available. The interpreter can be turned off for a text object by setting the 'Interpreter' property to 'none', and some LaTeX commands are simply not understood by it. The manipulation of figure annotation with LaTeX in MATLAB is otherwise simple and straightforward: one only needs the basic functions such as title, xlabel, ylabel, and text.
What is the difference between the annotation() and text() functions in MATLAB? text() creates a text object positioned in the data space of an axes; to display the same text at several locations, pass the coordinates as vectors, for example text([0 1],[0 1],'my text'), and to display different text at each location use a cell array (if you specify a categorical array, MATLAB uses the values, not the categories). annotation() creates a different type of graphics object — a textbox, ellipse, line, arrow, double arrow, or text arrow — which draws over all of the axes and does not pay attention to xlim and ylim. annotation(lineType,x,y) creates a line or arrow annotation extending between two points in the current figure, with lineType specified as 'line', 'arrow', 'doublearrow', or 'textarrow' and x and y specified as two-element vectors [x_begin x_end] and [y_begin y_end]; the annotation extends from the point (x_begin, y_begin) to (x_end, y_end), and by default the units are normalized to the figure. To display an annotation within a specific figure, uipanel, or uitab, use the container input argument. Starting in R2014b, annotations cannot cross uipanel boundaries; previous versions of MATLAB allowed annotations to extend into (or out of) the boundaries, but now they clip. A third option, a uicontrol with a text string, does not support the LaTeX interpreter at all. An annotation textbox can have a border and a background, or be invisible; its BackgroundColor property (a three-element RGB vector or one of the MATLAB predefined names, default none) sets the background color. You can set and query annotation object properties using the set and get functions and the Property Editor (displayed with the propertyeditor command). For positioning annotations relative to data, MATLAB offers the function dsxy2figxy to convert data-space points into normalized figure coordinates — a handy method if you do not want to worry about normalized units yourself. A few practical notes collected from the questions: the 'str' variable must be defined before the annotation call; if you build the annotation string from file output, remove the newline characters first, for example str(str==10)=[];; and an annotation added from a GUI is not deleted when the plot data changes, so the old annotation has to be removed explicitly, otherwise the new one is simply plotted over the old.
The default interpreter for legends is also 'tex'. You can specify the interpreter with the 'Interpreter' parameter/value pair input argument to the legend function, or right-click the legend in the figure window and change the interpreter from 'tex' to 'latex', which switches the legend string to math mode. The same applies to tick labels: set "TickLabelInterpreter" to 'latex' when you create the axes so the XTick and YTick labels use LaTeX fonts (in older releases such as R2013a this requires a workaround). Related how-to questions include rotating an annotation textbox; showing an individual YTickLabel to the right of a single Y axis; positioning annotations with respect to the axes in R2006a; making arrows; adding an annotation to a histogram containing the mean, standard deviation, and number of samples; creating a textbox annotation that uses the LaTeX interpreter with variable inputs taken from values within the MATLAB environment; annotating individual terms of a mathematical expression; wrapping a bracket around three lines of a figure; adding a second legend; inserting a formula into an annotation on a graph; adding a line break in LaTeX text; and expanding the row height of a LaTeX table placed in a textbox annotation (one suggested option is to return the LaTeX table string without saving it to a file). In GUIs built with GUIDE, a common request is to display a different LaTeX equation in a static text control (Tag: Eqn) according to the choice in a pop-up menu; a typical symptom of getting this wrong is a textbox that only shows a number which increases by one every time a choice is clicked. Simulink models can likewise be described with notes and annotations: notes can be added to any system in the model hierarchy, equations can be added by clicking the Insert Equation button in the annotation formatting toolbar and entering LaTeX or MathML code in the Edit Equation dialog box, and double-clicking the equation later lets you edit its code (for the supported commands, see Insert LaTeX Equation).
A common error is missing curly braces in the '\infty' argument; change the order of the statements, add the curly braces, and strings such as '(1) $$y= \frac{1}{(1+x)^2} = \sum_{k=1}^{\infty}{k(-x)}^{k-1}$$', '(2) $$y= \sin^2(x) = \sum_{k=1}^{\infty}{(-1)}^{k+1}\frac{2^{2k-1}x^{2k}}{(2k)!}$$', and '(3) $$y=e^{-x^2} = \sum_{k=1}^{\infty}{(-1)}^k\frac{x^{2k}}{k!}$$' render correctly (see https://www.mathworks.com/matlabcentral/answers/441352-latex-in-an-annotation#answer_357909). Another recurring problem is a comment placed inside an equation: a \text{} comment inside a fraction is treated as part of the fraction and unnecessarily extends the fraction bar, so keep the comment outside the fraction if the fraction should keep its normal size. A related symptom is a title that refuses to render with the LaTeX interpreter while the axis labels work fine, usually because sentences and symbols are mixed in the same string. On the LaTeX side proper, footnotes are numbered sequentially and typeset at the bottom of the page automatically, and \tag is intended as a one-off equation number used as an alternative to the automatic numbering; the automatic number becomes the default text for \ref, which gives some clue as to when to use one or the other.
For exporting figures to LaTeX documents, LaTeX Plot Annotation by Ioannis Filippidis (https://www.mathworks.com/matlabcentral/fileexchange/35141-latex-plot-annotation, MATLAB Central File Exchange, retrieved January 18, 2021) provides a single command to set the xlabel, ylabel, zlabel, title, and legend strings all at once. It accepts LaTeX and TeX strings, so equations and other symbols can be placed in the figure easily, and it is not necessary to export the graphics object specifically for LaTeX (e.g. using Laprint, Matfig2PGF, or any of the other converters available on the File Exchange). The font will match the LaTeX document (if in Computer Modern) and the size stays proportional to the figure if it is scaled in the document, which avoids inconsistencies between annotation and graphics. matlab2tikz and matlabfrag similarly export EPS with annotations for import into LaTeX, and Plotly's MATLAB API renders LaTeX in annotations, labels, and titles — one example is a visualization of Bessel functions of the first kind, solutions of a differential equation. The same idea exists in Python: as the matplotlib documentation's "Basic annotation" section puts it, a common use case of text is to annotate some feature of the plot, and the annotate() method provides helper functionality to make annotations easy.
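Since the notes above end on matplotlib's annotate(), here is a minimal Python sketch of the same idea — math markup inside a plot title and annotation. It uses matplotlib's built-in mathtext (dollar-delimited, TeX-like) rather than a full LaTeX installation, and the curve and label positions are made up purely for illustration.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 4.0, 200)
y = 1.0 / (1.0 + x) ** 2

fig, ax = plt.subplots()
ax.plot(x, y)

# Dollar signs delimit math, much like MATLAB's 'latex' interpreter.
ax.set_title(r"$y = \frac{1}{(1+x)^2}$", fontsize=14)

# annotate() draws text plus an arrow; xy is the annotated data point,
# xytext is where the label is placed (both in data coordinates here).
ax.annotate(r"$y(1) = \frac{1}{4}$",
            xy=(1.0, 0.25), xytext=(2.0, 0.6),
            arrowprops=dict(arrowstyle="->"))

plt.show()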
2021-04-23 10:38:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544976711273193, "perplexity": 2763.970916076415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00354.warc.gz"}
http://www.researchgate.net/publication/13586507_A_simple_method_of_sample_size_calculation_for_linear_and_logistic_regression
# A simple method of sample size calculation for linear and logistic regression

CSPCC, Department of Veterans Affairs, Palo Alto Health Care System (151-K), California 94304, USA. Statistics in Medicine (Impact Factor: 2.04). 08/1998; 17(14):1623-34. DOI: 10.1002/(SICI)1097-0258(19980730)17:143.0.CO;2-S. Source: PubMed.

ABSTRACT: A sample size calculation for logistic regression involves complicated formulae. This paper suggests use of sample size formulae for comparing means or for comparing proportions in order to calculate the required sample size for a simple logistic regression model. One can then adjust the required sample size for a multiple logistic regression model by a variance inflation factor. This method requires no assumption of low response probability in the logistic model as in a previous publication. One can similarly calculate the sample size for linear regression models. This paper also compares the accuracy of some existing sample-size software for logistic regression with computer power simulations. An example illustrates the methods.

##### Article: Sample Size for Logistic Regression with Small Response Probability

ABSTRACT: The Fisher information matrix for the estimated parameters in a multiple logistic regression can be approximated by the augmented Hessian matrix of the moment-generating function for the covariates. The approximation is valid when the probability of response is small. With its use one can obtain a simple closed-form estimate of the asymptotic covariance matrix of the maximum likelihood parameter estimates, and thus approximate sample sizes needed to test hypotheses about the parameters. The method is developed for selected distributions of a single covariate and for a class of exponential-type distributions of several covariates. It is illustrated with an example concerning risk factors for coronary heart disease. Journal of the American Statistical Association, 01/1981; 76(373):27-32.

##### Article: Sample size tables for logistic regression

ABSTRACT: Sample size tables are presented for epidemiologic studies which extend the use of Whittemore's formula. The tables are easy to use for both simple and multiple logistic regressions. Monte Carlo simulations are performed which show three important results. Firstly, the sample size tables are suitable for studies with either high or low event proportions. Secondly, although the tables can be inaccurate for risk factors having double exponential distributions, they are reasonably adequate for normal distributions and exponential distributions. Finally, the power of a study varies both with the number of events and the number of individuals at risk. Statistics in Medicine, 08/1989; 8(7):795-802.

##### Article: Sample size calculations for studies with correlated observations

ABSTRACT: Correlated data occur frequently in biomedical research. Examples include longitudinal studies, family studies, and ophthalmologic studies. In this paper, we present a method to compute sample sizes and statistical powers for studies involving correlated observations. This is a multivariate extension of the work by Self and Mauritsen (1988, Biometrics 44, 79-86), who derived a sample size and power formula for generalized linear models based on the score statistic. For correlated data, we appeal to a statistic based on the generalized estimating equation method (Liang and Zeger, 1986, Biometrika 73, 13-22). We highlight the additional assumptions needed to deal with correlated data. Some special cases that are commonly seen in practice are discussed, followed by simulation studies. Biometrics, 10/1997; 53(3):937-47.
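The recipe in the first abstract can be sketched in a few lines of Python: compute the sample size for a simple comparison of two proportions (the simple logistic regression case with a binary covariate), then inflate it by a variance inflation factor of 1/(1 − ρ²), where ρ is the multiple correlation between the covariate of interest and the remaining covariates. This is only an illustrative sketch of that idea, not the paper's exact formulae; the significance level, power, proportions, and ρ below are made-up inputs.

from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group n for detecting p1 vs p2 (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

def adjust_for_covariates(n_simple, rho):
    """Variance inflation factor: adjust n for a multiple logistic model."""
    return n_simple / (1 - rho ** 2)

n_simple = n_two_proportions(0.10, 0.20)            # simple model, binary covariate
n_multiple = adjust_for_covariates(n_simple, rho=0.4)  # covariate correlated with others
print(round(n_simple), round(n_multiple))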
2013-12-08 23:42:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857797384262085, "perplexity": 1035.8455208345863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163828351/warc/CC-MAIN-20131204133028-00065-ip-10-33-133-15.ec2.internal.warc.gz"}
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.110.230801
# Synopsis: A Distant Second

By measuring hydrogen line emission with an atomic clock hundreds of kilometers away, researchers place strict limits on possible corrections to relativity.

Light emission from hydrogen atoms allows spectacularly precise confirmation of quantum-mechanical laws. But theorists have yet to fully reconcile those laws with relativity, the other major foundation of modern physics. In Physical Review Letters, a multilaboratory collaboration reports improved hydrogen measurements that place limits on how big one possible correction to relativity could be.

Researchers at the Max Planck Institute for Quantum Optics in Garching, Germany, have pioneered methods that connect optical emission frequencies to the much lower radio frequencies of atomic clocks. But the best atomic clocks, based on a fountain of cesium atoms, are in distant labs such as the Federal Physical-Technical Institute (PTB) in Braunschweig, and can't be easily moved. So the two labs synchronized their setups by sending light signals back and forth over a $920$-$\text{km}$-long optical fiber. The connection allowed them to express the $1S$-$2S$ transition frequency in terms of the international standard definition of the second as $2,466,061,413,187,018$ hertz, with an uncertainty of just $11$ hertz.

The researchers exploited the unprecedented precision to look for variations of the frequency over a year. Such variations would show that the frequency depends on the motion of the Earth around the Sun, which is forbidden by relativity. But the team estimates that parameters that quantify that dependence can be no larger than a few parts in $10^{11}$. One of the parameters is slightly different from zero, but even more precise measurements will be needed to determine whether this difference is truly significant. – Don Monroe
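As a quick check of the quoted precision (this arithmetic is the editor's, not the article's): an 11-hertz uncertainty on a transition frequency of roughly 2.47 × 10^15 hertz corresponds to a fractional uncertainty of a few parts in 10^15.

frequency_hz = 2_466_061_413_187_018   # measured 1S-2S transition frequency
uncertainty_hz = 11                     # quoted uncertainty

print(f"fractional uncertainty = {uncertainty_hz / frequency_hz:.2e}")
# prints roughly 4.46e-15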
2017-01-21 19:41:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4309280514717102, "perplexity": 2016.7379599633084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00287-ip-10-171-10-70.ec2.internal.warc.gz"}
https://gilberttanner.com/blog/creating-your-own-objectdetector
# Creating your own object detector with the Tensorflow Object Detection API

Object detection is the craft of detecting instances of a certain class, like animals, humans and many more, in an image or video. The Tensorflow Object Detection API makes it easy to detect objects by using pretrained object detection models, as explained in my last article. In this article, we will go through the process of training your own object detector for whichever objects you like. I chose to create an object detector which can distinguish between four different microcontrollers.

### Introduction

In this article, we will go over all the steps needed to create our object detector, from gathering the data all the way to testing our newly created object detector. If you don't have the Tensorflow Object Detection API installed yet, you can watch my tutorial on it. The steps needed are:

1. Gathering data
2. Labeling data
3. Generating TFRecords for training
4. Configuring training
5. Training model
6. Exporting inference graph
7. Testing object detector

### Gathering data

Before we can get started creating the object detector, we need data which we can use for training. To train a robust classifier, we need a lot of pictures which differ a lot from each other: different backgrounds, random objects, and varying lighting conditions. You can either take the pictures yourself or download them from the internet. For my microcontroller detector, I took about 25 pictures of each individual microcontroller and 25 pictures containing multiple microcontrollers. These images have a high resolution, so we want to scale them down to make the training process faster. I wrote a little script that makes it easy to change the resolution of images.

from PIL import Image
import os
import argparse

def rescale_images(directory, size):
    for img in os.listdir(directory):
        im = Image.open(directory + img)
        im_resized = im.resize(size, Image.ANTIALIAS)
        im_resized.save(directory + img)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Rescale images")
    parser.add_argument('-d', '--directory', type=str, required=True, help='Directory containing the images')
    parser.add_argument('-s', '--size', type=int, nargs=2, required=True, metavar=('width', 'height'), help='Image size')
    args = parser.parse_args()
    rescale_images(args.directory, args.size)

To use the script, save it in the parent directory of the images as something like transform_image_resolution.py, then go into the command line and type:

python transform_image_resolution.py -d images/ -s 800 600

### Labeling data

Now that we have our images, we need to move about 80 percent of them into the object_detection/images/train directory and the other 20 percent into the object_detection/images/test directory (a small helper script for this split is sketched after this section). In order to label our data, we need some kind of image labeling software. LabelImg is a great tool for labeling images; it is freely available on GitHub, and prebuilt binaries can be downloaded easily. After downloading and opening LabelImg, you can open the training and testing directories using the "Open Dir" button. To create a bounding box, use the "Create RectBox" button. After creating the bounding box and annotating the image, you need to click save. This process needs to be repeated for all images in the training and testing directories.
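The 80/20 split mentioned above can be automated with a small helper. This script is my own convenience sketch, not part of the original tutorial, and it assumes the unsplit images sit in a hypothetical images/all folder.

import os
import random
import shutil

def split_dataset(source_dir, train_dir, test_dir, train_fraction=0.8):
    """Randomly move ~80% of images to train_dir and the rest to test_dir."""
    images = [f for f in os.listdir(source_dir)
              if f.lower().endswith(('.jpg', '.jpeg', '.png'))]
    random.shuffle(images)
    n_train = int(len(images) * train_fraction)
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(test_dir, exist_ok=True)
    for i, name in enumerate(images):
        dest = train_dir if i < n_train else test_dir
        shutil.move(os.path.join(source_dir, name), os.path.join(dest, name))

if __name__ == '__main__':
    split_dataset('images/all', 'images/train', 'images/test')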
### Generating TFRecords for training

With the images labeled, we need to create TFRecords that can be served as input data for training of the object detector. In order to create the TFRecords we will use two scripts from Dat Tran's raccoon detector, namely the xml_to_csv.py and generate_tfrecord.py files. After downloading both scripts, we first change the main method in the xml_to_csv.py file so that the created xml files are converted to csv correctly.

# Old:
def main():
    image_path = os.path.join(os.getcwd(), 'annotations')
    xml_df = xml_to_csv(image_path)
    xml_df.to_csv('raccoon_labels.csv', index=None)
    print('Successfully converted xml to csv.')

# New:
def main():
    for folder in ['train', 'test']:
        image_path = os.path.join(os.getcwd(), ('images/' + folder))
        xml_df = xml_to_csv(image_path)
        xml_df.to_csv(('images/' + folder + '_labels.csv'), index=None)
        print('Successfully converted xml to csv.')

Now we can transform our xml files to csv by opening the command line and typing:

python xml_to_csv.py

This creates two files in the images directory: one called test_labels.csv and another called train_labels.csv. Before we can transform the newly created files to TFRecords, we need to change a few lines in the generate_tfrecord.py file.

From:

# TO-DO replace this with label map
def class_text_to_int(row_label):
    # (the first if-branch is cut off in the original listing)
        return 1
    elif row_label == 'shirt':
        return 2
    elif row_label == 'shoe':
        return 3
    else:
        return None

To:

def class_text_to_int(row_label):
    if row_label == 'Raspberry_Pi_3':
        return 1
    elif row_label == 'Arduino_Nano':
        return 2
    elif row_label == 'ESP8266':
        return 3
    elif row_label == 'Heltec_ESP32_Lora':
        return 4
    else:
        return None

If you are using a different dataset, you need to replace the class names with your own. Now the TFRecords can be generated by typing:

python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record

These two commands generate a train.record and a test.record file, which can be used to train our object detector.

### Configuring training

The last thing we need to do before training is to create a label map and a training configuration file.

#### Creating a label map

The label map maps an id to a name. We will put it in a folder called training, which is located in the object_detection directory. The label map for my detector can be seen below.

item {
    id: 1
    name: 'Raspberry_Pi_3'
}
item {
    id: 2
    name: 'Arduino_Nano'
}
item {
    id: 3
    name: 'ESP8266'
}
item {
    id: 4
    name: 'Heltec_ESP32_Lora'
}

The id number of each item should match the id specified in the generate_tfrecord.py file.

#### Creating a training configuration

Now we need to create a training configuration file. As my model of choice I will use faster_rcnn_inception, which, just like a lot of other models, can be downloaded from this page. I will start with a sample config (faster_rcnn_inception_v2_pets.config), which can be found in the sample folder. First I will copy the file into the training folder, and then I will open it using a text editor in order to change a few lines in the config.
Line 9: change the number of classes to the number of objects you want to detect (4 in my case).

Line 106: change fine_tune_checkpoint to the path of the model.ckpt file:

fine_tune_checkpoint: "C:/Users/Gilbert/Downloads/Other/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"

Line 123: change input_path to the path of the train.record file:

input_path: "C:/Users/Gilbert/Downloads/Other/models/research/object_detection/train.record"

Line 135: change input_path to the path of the test.record file:

input_path: "C:/Users/Gilbert/Downloads/Other/models/research/object_detection/test.record"

Lines 125 and 137: change label_map_path to the path of the label map:

label_map_path: "C:/Users/Gilbert/Downloads/Other/models/research/object_detection/training/labelmap.pbtxt"

Line 130: change num_examples to the number of images in your test folder.

### Training model

To train the model we will use the train.py file, which is located in the object_detection/legacy folder. We will copy it into the object_detection folder, and then we will open a command line and type (Update: use the model_main.py file in the object_detection folder instead):

python model_main.py --logtostderr --model_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

If everything was set up correctly, the training should begin shortly. About every 5 minutes the current loss gets logged to Tensorboard. We can open Tensorboard by opening a second command line, navigating to the object_detection folder and typing:

tensorboard --logdir=training

This will open a webpage at localhost:6006. You should train the model until it reaches a satisfying loss. The training process can then be terminated by pressing Ctrl+C.

### Exporting inference graph

Now that we have a trained model, we need to generate an inference graph, which can be used to run the model. For this, we first need to find out the highest saved step number: navigate to the training directory and look for the model.ckpt file with the biggest index. Then we can create the inference graph by typing the following command in the command line, where XXXX represents that highest number.

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph

### Testing object detector

In order to test our newly created object detector, we can use the code from my last Tensorflow object detection tutorial. We only need to replace the fourth code cell.

From:

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that are used to add a correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

To:

MODEL_NAME = 'inference_graph'
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
PATH_TO_LABELS = 'training/labelmap.pbtxt'

Now we can run all the cells, and we will see a new window with a camera stream opening.

### Conclusion

The Tensorflow Object Detection API allows you to create your own object detector using transfer learning. If you liked this article, consider subscribing to my Youtube channel and following me on social media.
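For reference, here is a minimal standalone inference sketch in the same spirit as the replaced notebook cell. It is not the tutorial's code: it assumes TensorFlow 1.x and the standard output tensor names used by the Object Detection API's exported frozen graphs, and the test image path is hypothetical.

import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_FROZEN_GRAPH = 'inference_graph/frozen_inference_graph.pb'

# Load the exported frozen graph once.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

def detect(image_path):
    """Run the detector on one image and return the raw output arrays."""
    image = np.array(Image.open(image_path))
    with tf.Session(graph=graph) as sess:
        outputs = {name: graph.get_tensor_by_name(name + ':0')
                   for name in ['detection_boxes', 'detection_scores',
                                'detection_classes', 'num_detections']}
        image_tensor = graph.get_tensor_by_name('image_tensor:0')
        return sess.run(outputs, feed_dict={image_tensor: image[None, ...]})

if __name__ == '__main__':
    result = detect('images/test/example.jpg')   # hypothetical test image
    print(result['detection_scores'][0][:5])      # top 5 confidence scores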
2020-04-03 00:38:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2151390165090561, "perplexity": 3973.7625041226497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370509103.51/warc/CC-MAIN-20200402235814-20200403025814-00268.warc.gz"}
https://gmatclub.com/forum/for-how-many-values-of-k-is-12-12-the-least-common-multiple-86737.html
# For how many values of k is 12^12 the least common multiple

Manager (jade3), 12 Nov 2009 (last edited by Bunuel on 30 Jun 2016: edited the question and added the OA):

For how many values of k is 12^12 the least common multiple of the positive integers 6^6, 8^8 and k?

A. 23
B. 24
C. 25
D. 26
E. 27

Math Expert (Bunuel), 30 Mar 2012:

essarr wrote:
sriharimurthy wrote: Thus, k will also be in the form of: $$(2^a)*(3^b)$$
Hi, I'm trying to understand this question. The explanation seems good, but I still can't seem to get a grasp of it. Why can we say that k is also in the form of $$(2^a)*(3^b)$$? Also, how do we consider the $$6^6$$ term in this explanation? Any help is appreciated. Thanks!

We are given that $$12^{12}=2^{24}*3^{12}$$ is the least common multiple of the following three numbers:

$$6^6=2^6*3^6$$;
$$8^8 = 2^{24}$$; and
$$k$$.

First notice that $$k$$ cannot have any primes other than 2 or/and 3, because the LCM contains only these primes. Now, since the power of 3 in the LCM is higher than the power of 3 in either the first number or the second, $$k$$ must have $$3^{12}$$ as a factor (else how would $$3^{12}$$ appear in the LCM?).

Next, $$k$$ can have 2 as its prime in ANY power ranging from 0 to 24, inclusive (it cannot have a higher power of 2 since the LCM limits the power of 2 to 24). For example $$k$$ can be:

$$2^0*3^{12}=3^{12}$$;
$$2^1*3^{12}$$;
$$2^2*3^{12}$$;
...
$$2^{24}*3^{12}=12^{12}=LCM$$.

So, $$k$$ can take a total of 25 values. Hope it helps.
Manager (sriharimurthy), 12 Nov 2009:

$$6^6 = (2^6)*(3^6)$$
$$8^8 = 2^{24}$$

Now we know that the least common multiple of the above two numbers and k is:

$$12^{12} = (2*2*3)^{12} = (2^{24})*(3^{12})$$

Thus, k will also be of the form $$(2^a)*(3^b)$$.

Now, b has to be equal to 12, since in order for $$(2^{24})*(3^{12})$$ to be a common multiple, at least one of the numbers must have the terms $$2^{24}$$ and $$3^{12}$$ as its factors (not necessarily the same number). We can see that $$8^8$$ already takes care of the $$2^{24}$$ part. Thus, k has to take care of the $$3^{12}$$ part of the LCM. This means that the value of k is $$(2^a)*(3^{12})$$, where a can be any value from 0 to 24 (both inclusive) without changing the value of the LCM. Thus k can have 25 values. Choice (C). Cheers.

##### General Discussion

Manager (sriharimurthy), 12 Nov 2009:

Quote: $$8^8 = 2^24$$
That should read 8^8 = 2^(24), and similarly for the other numbers. Sorry for the confusion — I wasn't able to get a two-digit power using the math function. If anyone knows how to do it, please let me know. Cheers.

Math Expert (Bunuel), 12 Nov 2009:

Edited your post. Please check that I didn't mess it up accidentally. To get a two-digit power, just put the power in {}, e.g. 2^{24}, and mark it with the [m] button.

Manager (sriharimurthy), 12 Nov 2009:

Nope, you didn't mess it up — you only made it better! Thanks Bunuel! In fact, $$thanks^{10}$$!!
Intern (essarr), 29 Mar 2012:

sriharimurthy wrote: Thus, k will also be in the form of: $$(2^a)*(3^b)$$

Hi, I'm trying to understand this question. The explanation seems good, but I still can't seem to get a grasp of it. Why can we say that k is also in the form of $$(2^a)*(3^b)$$? Also, how do we consider the $$6^6$$ term in this explanation? Any help is appreciated. Thanks!

Veritas Prep GMAT Instructor (Karishma), 30 Mar 2012:

Here is my explanation. The LCM (least common multiple) of 3 numbers a, b and c is a multiple of each of these 3 numbers. So for every prime factor in these numbers, the LCM takes the highest power available in any number, e.g.

$$a = 2*5$$
$$b = 2*5*7^2$$
$$c = 2^4*5^2$$

What is the LCM of these 3 numbers? It is $$2^4*5^2*7^2$$. Every prime factor is included, and the power of every prime factor is the highest available in any number.

So if
$$a = 2^6*3^6$$
$$b = 2^{24}$$
k = ?
LCM $$= 2^{24}*3^{12}$$

What values can k take? First of all, the LCM has $$3^{12}$$. From where did it get $$3^{12}$$? a and b have at most $$3^6$$. This means k must have $$3^{12}$$. Also, the LCM has $$2^{24}$$, which is available in b, so k needn't have $$2^{24}$$; it can have 2 to any power as long as it is less than or equal to 24. k can be $$2^{0}*3^{12}$$ or $$2^{1}*3^{12}$$ or $$2^{2}*3^{12}$$ ... $$2^{24}*3^{12}$$. The power of 2 in k cannot exceed 24, because then the LCM would have the higher power. What about some other prime factor? Can k be $$2^{4}*3^{12}*5$$? No, because then the LCM would have 5 too. So k can take 25 values only.

Intern (essarr), 31 Mar 2012:

Ahhhh, I see it now; thanks so much Bunuel & Karishma, that clarified it.

Manager, 23 Aug 2014:

Bunuel wrote: [solution quoted above]
Hi Bunuel, I can see why k needs to have $$3^{12}$$, but can't k have other values with base 2? Meaning, why does the range only go from $$2^0$$ to $$2^{24}$$ — why can't it be $$2^{-5}$$, etc.?

Intern, 8 Dec 2015:

There are 3 numbers: 6^6 (in prime factors, 2^6 * 3^6), 8^8 (that is, 2^24) and k. The LCM of these three numbers is given as 12^12 (that is, 3^12 * 2^24). First we can ignore k and find the LCM of the given two numbers, (2^6 * 3^6) and (2^24): that is 3^6 * 2^24. (Note that the LCM of any two or more numbers is the product of all distinct prime factors with the greatest powers.) So if 3^6 * 2^24 (the LCM of the given two numbers) and k have an LCM of 3^12 * 2^24, then k must have the factor 3^12 (this is a necessity, because the other numbers are limited to 3^6). On the other hand, besides 3^12, k can take prime 2 to a power of 0 to 24 (2^0 to 2^24). Therefore k can be any of the following: (3^12 and 2^0) or (3^12 and 2^1) or (3^12 and 2^2), ..., (3^12 and 2^24) — that is, 25 in total. (I think this is a 700-level question.)

Director (LogicGuru1), 30 Jun 2016:

Let's use a quick example. What is the LCM of 2, 4, 9, 12? Factorise all the numbers one by one and write them as primes raised to exponents, of the form $$(Prime1)^m \times (Prime2)^n$$...

$$2=2^1$$
$$4=2*2=2^2$$
$$9=3*3=3^2$$
$$12=4*3=2*2*3=2^2 * 3^1$$

Now the LCM of these numbers takes the highest power of each prime from each number (one time only), so LCM = $$2^2 * 3^2$$ = 4*9 = 36. Notice how $$2^1$$ and $$3^1$$ are not contributing towards the LCM at all.

Now apply the same logic to your question. You already know the LCM is $$12^{12}=(4*3)^{12}=(2^2*3)^{12}=2^{24}*3^{12}$$. Similarly, $$6^6= (2*3)^6=2^6*3^6$$, so we know $$6^6$$ is contributing neither 2's nor 3's towards the LCM, and $$8^8= (2^3)^8= 2^{24}$$, so 8^8 contributes all of the $$2^{24}$$ towards our LCM.

Now we need $$3^{12}$$ to reach the LCM. Since k is the only remaining number, k must contribute $$3^{12}$$, but it is also possible that k contains $$2^m$$, where m can vary from 0 to 24. Remember, for the LCM we take the highest power, so $$2^{24}$$ can be common to $$8^8$$ as well as k. Therefore the total number of values of 2 in k = $$2^0$$ to $$2^{24}$$ (total = 25), and one compulsory value of $$3^{12}$$ (total = 1). Total = 26 values. Why am I overshooting by 1?
Please explain your answers properly. FINAL GOODBYE :- 17th SEPTEMBER 2016. .. 16 March 2017 - I am back but for all purposes please consider me semi-retired. Originally posted by LogicGuru1 on 30 Jun 2016, 10:09. Last edited by LogicGuru1 on 01 Jul 2016, 00:38, edited 1 time in total. Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8795 Location: Pune, India Re: For how many values of k is 12^12 the least common multiple  [#permalink] ### Show Tags 30 Jun 2016, 23:39 For how many values of k is 12^12 the least common multiple of the positive integers 6^6, 8^8 and k? A. 23 B. 24 C. 25 D. 26 E. 27 Quote: (Total=25) and one compulsory value of 3^12 Total=26 values Here is the problem in your solution. When you say the possible values vary from 2^0 to 2^24 (that is 25 values) AND another value is 3^12, you are double counting 3^12. Note that 2^0 = 1. So 2^0*3^12 = 3^12 Hence you have only 25 values. _________________ Karishma Veritas Prep GMAT Instructor Director Joined: 04 Jun 2016 Posts: 570 GMAT 1: 750 Q49 V43 For how many values of k is 12^12 the least common multiple  [#permalink] ### Show Tags 01 Jul 2016, 00:50 VeritasPrepKarishma wrote: For how many values of k is 12^12 the least common multiple of the positive integers 6^6, 8^8 and k? A. 23 B. 24 C. 25 D. 26 E. 27 Quote: (Total=25) and one compulsory value of 3^12 Total=26 values Here is the problem in your solution. When you say the possible values vary from 2^0 to 2^24 (that is 25 values) AND another value is 3^12, you are double counting 3^12. Note that 2^0 = 1. So 2^0*3^12 = 3^12 Hence you have only 25 values. Thanks Karishma. Just to clarify one more doubt: 4 = 2*2 = $$2^2$$ ==> The general form is $$2^q$$ Total possible factors of 4 = q+1 = 2+1 = 3 {1,2,4} Is this the same thing that you mentioned: I am counting 1 in $$3^{12}$$ and also in $$2^0$$, and I need to drop it one time? Right?? In all such questions, does one need to ignore "1" in the final count? _________________ Posting an answer without an explanation is "GOD COMPLEX". The world doesn't need any more gods. Please explain your answers properly. FINAL GOODBYE :- 17th SEPTEMBER 2016. .. 16 March 2017 - I am back but for all purposes please consider me semi-retired. Current Student Status: DONE! Joined: 05 Sep 2016 Posts: 374 Re: For how many values of k is 12^12 the least common multiple  [#permalink] ### Show Tags 29 Nov 2016, 17:30 K can take on any of the following values: (3^12) (3^12)(2) (3^12)(2^2) (3^12)(2^3) (3^12)(2^4) (3^12)(2^5) (3^12)(2^6) (3^12)(2^7) (3^12)(2^8) (3^12)(2^9) (3^12)(2^10) (3^12)(2^11) (3^12)(2^12) (3^12)(2^13) . . . (3^12)(2^24) Thus, there are 25 values that K can take on. C.
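For anyone who wants to double-check the counting argument outside the forum, here is a minimal brute-force sketch in Python (an editorial illustration, not from the thread; it assumes Python 3.9+ for math.lcm). Since k must divide its own LCM, it is enough to test every divisor of 12^12.

from math import lcm   # math.lcm accepts multiple arguments in Python 3.9+

# k must divide lcm(6^6, 8^8, k) = 12^12, so every candidate k is a divisor
# 2^p * 3^q of 12^12 = 2^24 * 3^12.
target = 12 ** 12
a, b = 6 ** 6, 8 ** 8

count = 0
for p in range(25):          # power of 2 in k: 0 .. 24
    for q in range(13):      # power of 3 in k: 0 .. 12
        k = 2 ** p * 3 ** q
        if lcm(a, b, k) == target:
            count += 1

print(count)                 # 25, i.e. answer choice C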
https://jedward706.wordpress.com/2008/11/03/format-for-math-and-science-homework-latex/
# Life and Musings of Ed ## Format for Math and Science Homework – LaTeX In Math Tutor, My LaTeX Experience on 3 November 08 at 4:29 pm I have long thought that math and science homework should emphasize depth and quality over quantity, especially once basic arithmetic-type skills have been established; generally somewhere in those middle school years.  I have recently been encouraging students, who I mentor, to create LaTeX templates and to use these to create quality papers for submission to their teachers.  physhwktemplate22 I generally find that once the template is created to fit a particular class, it does not take much longer to work homework sets directly in LaTeX than on paper, and the result is much more desirable. This particular homework example uses a LaTeX template  based on the MEMOIR document class and highlights the basic problem solving approach of clearly articulating the Given information, the thing one is supposed to Find, a Plan for finding it, the Calculations and finally a clear Solution statement which directly answers the question which was asked. Here is the /LaTeX code: % Physics Homework Template using MEMOIR class \documentclass[openany]{memoir} %preamble \usepackage{calc} \usepackage{color} \usepackage{graphicx} %define title and other basic document info %the title should reflect the style and give a foretaste of the document %work on making a stylized title page — or title on a page as in ARTICLE class \title{\huge \textbf{Ch 15 Recitation Problems}} \date{05 Nov 08}                    % could use \today  , but I like this date format better %\publisher{}                            %one day I’ll need this  😉 %\thanks{Special thanks to God for the ability to work}        %produces a footnote to the title \definecolor{ared}{rgb}{.647,.129,.149} \renewcommand\colorchapnum{\color{ared}} \renewcommand\colorchaptitle{\color{ared}} \chapterstyle{bringhurst} %one of a number of chapter styles available…this one doesn’t use the ared color %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} %title \thispagestyle{empty} %\begin{minipage}{300pt} \begin{center}{ \hrule \vspace{30pt} \hspace{10pt} \thetitle  \vspace{30pt} \newline \theauthor \hspace{30pt} \thedate  \vspace{26pt} \hrule } \end{center} %\end{minipage} \clearpage %\frontmatter    %use if needed –page numbers as lower case roman numerals i, ii,… %\mainmatter %%other declarations \pagestyle{Ruled}                    %one of a number of possible page styles \midsloppy                             %to minimize overfull lines %Layout the page %%Try this manual golden ratio layout or…           default seems better for now %\settypeblocksize{*}{\lxvchars}{1.618} %\setulmargins{50pt}{*}{*} %\setlrmargins{*}{*}{1.618} %\semiisopage[12] %try this predefined layout — others predefined ones are options in MEMOIR… %this one looked best but did not work \checkandfixthelayout          %make the layout happen and provide details in log during build \chapter{Chapter 15 Recitation Problems} \section{Problem 3} \subsection{Given} A 6m x 12m swimming pool which slopes linearly from a 1.0m depth at one end to a 3.0m depth at the other end. \subsection{Find} The mass of water in the pool. $M_{H2O}$ \subsection{Plan} Simply find the volume of the pool and multiply by the density of water. 
$M_{H2O}=\rho \cdot V$\\ $V=A_{trapezoidal side}\times Width$ \subsection{Calculations} \begin{eqnarray*} V &=& \frac{3+1}{2} \times 12 \times 6\\ V &=& 24 \times 6 \\ V &=& 144 m^3 \\ M_{H2O} &=& 144 m^3 \times \frac{1000 kg}{m^3} \\ M_{H2O} &=& 144000 kg \end{eqnarray*} \subsection{Solution} \begin{minipage}{300pt} \begin{center}{ \hrule \vspace{20pt} The mass of water in the pool is $1.44 \times 10^5 kg$         %nicely written sentence solution goes here \vspace{16pt} \hrule } \end{center} \end{minipage} \section{Problem 5} \subsection{Given} The deepest point in the ocean is 11 km below sea level. \subsection{Find} The pressure in atmospheres at this depth. \subsection{Plan} The hydrostatic pressure at a depth, d is $P = P_o + \rho g d$. I just need take care with the units. \subsection{Calculations} $P = 1 atm + 1030 \frac{kg}{m^3} \times 9.8 \frac{N}{kg} \times 11000 m \times \frac{1 atm}{1.013 \times 10^5 Pa}$ \subsection{Solution} \begin{minipage}{300pt} \begin{center}{ \hrule \vspace{20pt} The pressure at 11 km below sea level is ~1097 atmospheres.   %nicely written sentence solution goes here \vspace{16pt} \hrule } \end{center} \end{minipage} \section{Problem 9} \subsection{Given} A submarine with a 20 cm diameter window which is 8.0 cm thick.  The manufacturer says it can stand forces up to $1.0 \times 10^6 N$. The pressure inside the submarine is maintained at 1.0 atm. \subsection{Find} The maximum safe depth for the submarine. \subsection{Plan} Since $P = \frac{F}{A}$, I can just find the depth at which the pressure will reach the manufacturers maximum force for the area of the given window.  Again, paying close attention to the units. The fact that the inside of the submarine is maintained at 1.0 atm allows me to use $P_{max} = \frac{F_{max}}{A}=\rho g d$ and solve for $d$. \subsection{Calculations} The area of the window is,\\ $\pi \times (0.10 m)^2 = \frac{\pi}{100} m^2$\\ So the maximum pressure for this window is, \\ $\frac{(1.0 \times 10^6 N)}{\frac{\Pi}{100} m^2} = 3.183099 \times 10^7 Pa$\\ The depth at which this pressure is reached is,\\ $\frac{3.183099 \times 10^7 Pa}{1030 \frac{kg}{m^3} \times 9.8 \frac{N}{kg}} = 3153.5 m$ \subsection{Solution} \begin{minipage}{300pt} \begin{center}{ \hrule \vspace{20pt} The submarine can descend to a depth of ~3150 meters.                %nicely written sentence solution goes here \vspace{16pt} \hrule } \end{center} \end{minipage} \end{document} 1. Thanks!!!! Like 2. […] – templates and examples – some examples – this article shortly describles how to do exercises following given-find-plan-calculations-… – “Hints, tips, and help for writing mathematics well” – “Creating […] Like 3. I have been looking for this simple command for many time and I have found it in your blog: \definecolor{shadecolor}{cmyk}{.3,.1,0,0} It works great!!!!!!!!!!!! Many thank’s Miquel Like
http://www.romhacking.net/forum/index.php?action=printpage;topic=27053.0
# Romhacking.net ## Romhacking => ROM Hacking Discussion => Topic started by: Chicken Knife on September 19, 2018, 08:25:57 am Title: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on September 19, 2018, 08:25:57 am So I've mostly finished my script revisions to Dragon Warrior 1 and I'm now looking at taking on the sequel. There isn't a dedicated table file posted, but when I plug in the table data from the first game, I'm able to read the text for spells, items and monster names in my hex editor. The curious thing is that the bulk of in game dialogue text does not show up in a readable form. Glancing over information for later Dragon Warrior games, it looks like text compression exists for the dialogue in the 4th game. Am I most likely facing a compression issue in DW2? If so, are there any good universal tools that would help me with the process of extracting, decoding, and reinserting the text? Title: Re: Handling possible text compression Post by: FCandChill on September 19, 2018, 05:43:03 pm Chances are, this game uses DTE compression. It's very common in NES games. Can you find data that sort of ressembles dialogue like "S.e.h prince." ("Save the princess" compressed)? Also, you might want to have a dedicated thread for your Dragon Quest questions ... just a suggestion. September 19, 2018, 05:43:36 pm - (Auto Merged - Double Posts are not allowed before 7 days.) As for tools, check here https://www.romhacking.net/?page=utilities&category=14&platform=&game=&author=&os=&level=&perpage=20&title=&desc=&utilsearch=Go Title: Re: Handling possible text compression Post by: Chicken Knife on September 19, 2018, 07:46:22 pm I've used the ROM map available on this page to determine the block of data where the dialogue is stored. As I scan through that data, it does not appear as an abbreviated form of actual text like your example of S.e.h prince. Instead, all the data in that section appears as 80-90 percent non letter entries, with occaisional letter entries interspersed. As far as your point on forum etiquette, it's a fair one. I know I've been throwing a lot of separate topics onto this board lately. I actually do have a dedicated Dragon Quest thread under personal projects. I figured that thread would attract primarily readers interested in my Dragon Quest project, but posting here would catch a wider audience of people willing to help with the numerous technical obstacles I face as a very new person to this process. Perhaps it would be better to have a unified thread here that isn't really about my project but serves to keep my large mess of questions in one place? Title: Re: Handling possible text compression Post by: FCandChill on September 19, 2018, 08:09:27 pm I've used the ROM map available on this page to determine the block of data where the dialogue is stored. You didn't link to anything in your post, but I assume you meant to link here: https://datacrystal.romhacking.net/wiki/Dragon_Warrior_II:ROM_map The info you referenced is under the section name Dragon Quest not Warrior. In other words ... the page covers the Japanese version, not the English version. That may be your issue. Title: Re: Handling possible text compression Post by: Chicken Knife on September 19, 2018, 08:54:53 pm Fair point, but I have used a hex editor to carefully scroll through all the data in the rom, several times actually. On each search I come across the full list of items, monsters and spell names. The only other text I come across is the prologue text added for the US version intro. 
The dialogue text is no where to be found. When I go through DW1 or even DW3, I easily locate the large block of general dialogue text, along with the items, monster names and spells. Title: Re: Handling possible text compression Post by: Psyklax on September 20, 2018, 02:00:55 am I second the idea of a dedicated thread for all this: no need to make lots of new threads. :) As for the text compression question, if I can find the time I can have a look myself. I know for a fact that Dragon Warrior doesn't have compression but I can't remember about DW2. I'd be a bit surprised if it did because neither it nor the first game have an overwhelming amount of text. Maybe later today I'll be able to spend five minutes checking the text to see how it's stored in the ROM. Compression does happen in text heavy games - my efforts translating Time Stranger from 1986 found a neat dictionary compression scheme - but again, I'd be a little surprised were that the case here. Title: Re: Handling possible text compression Post by: Chicken Knife on September 20, 2018, 08:58:54 am Sounds great Psyklax. Once this issue gets figured out I'll change the title of this post to something generalized and put all my technical questions here going forward. Going back to the issue at hand, I also found it strange that the game would use compression--especially since there is a ton of empty space in the US rom. I initially thought this might be an issue of the game having the alphabet under multiple table entries. To test that, I went into FCEUX Nametable Viewer and checked the hex code of the letters showing up in dialogue text. The hex code for the letters in dialogue was exactly the same as the code for the letters in monster, item and spell names--with all of the latter showing up in a hex editor when I load up table data. Would this indicate compression or could another factor be causing this? I appreciate you taking a look at the rom! Title: Re: Handling possible text compression Post by: Psyklax on September 20, 2018, 10:43:22 am I went into FCEUX Nametable Viewer and checked the hex code of the letters showing up in dialogue text. The hex code for the letters in dialogue was exactly the same as the code for the letters in monster, item and spell names--with all of the latter showing up in a hex editor when I load up table data. Would this indicate compression or could another factor be causing this? I don't think simply looking at the nametable would indicate compression per se. The way I would do it is the more advanced method of debugging the game to see how the text gets from the ROM to the screen, and compression will become apparent then. A simpler way is the old fashioned relative search technique locating where the text is, and discovering the compression then. I'll have a look later, anyway. I've already hacked this game to double experience and gold, so that's why I'm familiar with it. Title: Re: Handling possible text compression Post by: KingMike on September 20, 2018, 12:50:32 pm I know Dragon Warrior II and III used ROM expansion so I wouldn't expect something beyond dictionary, because what's the point once they're expanding the ROM. Yes, in DW4 they used Huffman which is tricky but that's because they couldn't really expand further. SOROM or whatever it was (the 512KB MMC1 mapper) was already about the limit. 
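As an aside, the old-fashioned relative search Psyklax mentions is easy to sketch: instead of matching exact byte values, you match the differences between consecutive bytes against the letter differences of a known word, which works whenever the font stores the alphabet as one contiguous run. The sketch below is a generic illustration in Python (the filename is just a placeholder, not a tool from this thread); on DW2 it should still turn up the uncompressed item/monster/spell names while finding nothing for the compressed dialogue.

Code: [Select]
# Relative search: find offsets where consecutive byte differences match the
# letter differences of a known word, whatever the game's character codes are.
def relative_search(rom: bytes, word: str):
    deltas = [ord(b) - ord(a) for a, b in zip(word, word[1:])]
    hits = []
    for i in range(len(rom) - len(word) + 1):
        window = rom[i:i + len(word)]
        if all(window[j + 1] - window[j] == d for j, d in enumerate(deltas)):
            hits.append(i)
    return hits

with open("Dragon Warrior II (U) [!].nes", "rb") as f:   # placeholder filename
    rom = f.read()

# Stick to one letter case so the search never crosses a gap in the font table.
for offset in relative_search(rom, "princess"):
    print(f"possible match at 0x{offset:X}")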
Title: Re: Handling possible text compression Post by: Psyklax on September 20, 2018, 06:00:12 pm Well, you were right: DW2 uses a rather tricky compression method which I don't fully understand yet. I'm still looking at it, and it's not like what I've seen before. I'd say it's a bit like dictionary compression, except not... I'll continue until I've figured it out, but it's weird. Title: Re: Handling possible text compression Post by: Chicken Knife on September 20, 2018, 07:40:20 pm Thank you so much for diving into this Psyklax. Why did those knuckleheads at Nintendo of America feel the need to give us this headache with all that free space at their disposal.  :banghead: Title: Re: Handling possible text compression Post by: Alchemic on September 21, 2018, 01:36:17 am I did some digging around in DW2 a few years back. My notes are kind of a mess, but to summarize: • Text is built up out of pieces that are five or ten bits in length. • The underlying characters for each piece are at 0xB49B - 0xB686. Abbreviated TBL below. • The piece lengths are at 0xB44B - 0xB49A. One nybble per piece. • You can think of the pieces as one table with 160 entries, or five tables with 32 entries each. • The 1st table handles bit patterns 00000 through 11011. • The 2nd table handles 11100 xxxxx, the 3rd 11101 xxxxx, the 4th 11110 xxxxx, the 5th 11111 xxxxx. • The text pointers are at 0xB762 - 0xB7C1. • Each pointer points at a blob of 16 concatenated strings. • Compressed text is at 0x14010 - 0x17FE6, and a bit more at 0xB7C2 to 0xBE0F. Code: [Select] TBL for 0xB49B: 00-09 = 0-9 0A-23 = a-z 24-3D = A-Z 3E-57 = a-z again 58=A 59=(space) 5A=[SUN] 5B=[STAR] 5C=[MOON] 5D=[DROP] 5E=[HEART] 5F=(space) 60=(space) 61=' 62=" 63=-> 64=" 65=' 66=' 67=' 68=.' 69=, 6A=- 6B=. 6C=& 6D=(space) 6E=? 6F=! 70=; Text-dumping code in Python (https://github.com/Osteoclave/game-tools/blob/master/nes/dragonwarrior2_textdump.py) Title: Re: Handling possible text compression Post by: Psyklax on September 21, 2018, 03:56:14 am Wow, Alchemic, that confirms what I saw: that it's pretty messed up. :D I got as far as noticing that each individual word is assembled letter-by-letter in SRAM before being passed off to the PPU, but it would've taken me time to get to the level of detail you've acquired. That has to be the most convoluted text handling scheme I've ever seen in a game. I had a look so I can see what you're talking about there. It's still messing with my brain, so I think I'll leave it for now. :) Title: Re: Handling possible text compression Post by: abw on September 21, 2018, 08:57:00 am Maybe later today I'll be able to spend five minutes checking the text to see how it's stored in the ROM. If you actually had cracked that code in 5 minutes, you would have had my vote for the annual "God of ROMHacking" award :P. I did some digging around in DW2 a few years back. My notes are kind of a mess, but to summarize: [...] Text-dumping code in Python (https://github.com/Osteoclave/game-tools/blob/master/nes/dragonwarrior2_textdump.py) Nice work Alchemic! I'm sure somebody's next question is going to be: did you also happen to write an insertion script? That has to be the most convoluted text handling scheme I've ever seen in a game. The basic input -> output encoding is pretty simple, but I definitely agree that DW2's text engine could have been implemented in a much less complicated way. It took me a couple of days to work through it enough to get a decent script dump, but I can confirm that Alchemic's notes are indeed accurate. 
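To make the 5-bit/10-bit piece scheme concrete, here is a toy decoding sketch in Python. The handful of PIECES entries are lifted from the table file posted later in the thread; the bit order (most significant bit first) and the cut-down dictionary are assumptions for illustration only, so treat Alchemic's linked dumper as the authoritative version.

Code: [Select]
from itertools import islice

# Toy subset of the dictionary: 5-bit codes below 11100 are complete pieces,
# while codes 11100-11111 pull in 5 more bits to select from a second table.
PIECES = {
    "11000": "t",
    "01100": "h",
    "01001": "e",
    "1110010000": "The",
    "1111111110": " thou",
}

def bits_from(data: bytes):
    """Yield bits from a byte string, most significant bit first (assumed order)."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def decode(data: bytes) -> str:
    out, bits = [], bits_from(data)
    while True:
        code = "".join(str(b) for b in islice(bits, 5))
        if len(code) < 5:
            break                                # ran out of input
        if code.startswith("111"):               # 11100..11111: read 5 more bits
            code += "".join(str(b) for b in islice(bits, 5))
        piece = PIECES.get(code)
        if piece is None:
            break                                # not in this toy subset
        out.append(piece)
    return "".join(out)

# 11000 01100 01001 (plus one pad bit) packs into two bytes and decodes as "the".
print(decode(bytes([0b11000011, 0b00010010])))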
I also found it strange that the game would use compression--especially since there is a ton of empty space in the US rom. Why did those knuckleheads at Nintendo of America feel the need to give us this headache with all that free space at their disposal.  :banghead: It's not always a question of how much free space in total, but where that free space is located. DW2's script is large enough that even with the compression scheme they added, it still doesn't fit inside a single PRG bank, which is why the script jumps from 0x17FE6 to 0xB7C2. Maybe they were trying to keep almost all of the text-related code and data confined to 2 PRG banks? Title: Re: Handling possible text compression Post by: Chicken Knife on September 24, 2018, 07:04:44 am @abw got it right. I'm definitely asking for someone to help me write a script for this. Wouldn't I need two scripts: one to uncompress the text into readable / editable form and another to recompress for reinsertion? I'm willing to keep the maximum length of each line the same if it means not having to worry about readjusting all the pointers. In the mean time, I am keeping myself busy figuring out the best way to edit the monster, spell and item lists but I would be so very grateful to have a means of moving forward with the script one day soon. Title: Re: Handling possible text compression Post by: abw on September 24, 2018, 09:57:45 pm @abw got it right. I'm definitely asking for someone to help me write a script for this. Wouldn't I need two scripts: one to uncompress the text into readable / editable form and another to recompress for reinsertion? I'm willing to keep the maximum length of each line the same if it means not having to worry about readjusting all the pointers. If all else fails, abcde (http://www.romhacking.net/utilities/1392/) can definitely handle both extracting and inserting text like this along with updating the pointers, and you'll have all the original space at your disposal (there's no free space conveniently available after the original script ends, since the very next byte is code, so if you need more space than the original, it'll take a bit more work). As an added bonus, thanks to the original encoding being somewhat inefficient, abcde can actually re-insert the original script using $79 fewer bytes. Title: Re: Handling possible text compression Post by: Chicken Knife on September 25, 2018, 08:42:31 am @abw Am I to understand that your software can already handle unwinding the existing compression routine in DW2 without the need for a special script being written? Also, if I am to use the more efficient form of compression available through your software, it will automatically rework the code of the game to be able to read the new compression routine? Something tells me it will be nowhere near this easy. :laugh: Title: Re: Handling possible text compression Post by: abw on September 25, 2018, 06:10:59 pm Am I to understand that your software can already handle unwinding the existing compression routine in DW2 without the need for a special script being written? [...] Something tells me it will be nowhere near this easy. :laugh: I'm not going to make any promises about how easy using abcde will be for anybody else, but... basically, yeah. Like I said earlier, the text encoding itself is actually pretty simple, it's just that DW2's code for parsing that encoding is fairly unpleasant. Give abcde the right table file, extract command file, and ROM, and watch it go. 
Stick a couple of extra lines in the script dump file it generates to turn it into a valid insert command file, and then run abcde again with the same table file, the insert command file (edited to have your new text, if so desired), and (a copy of) the ROM. That said, unless you start making ASM changes, you're still constrained by the total available space and text engine and stuff, so it's not a magic bullet. Also, if I am to use the more efficient form of compression available through your software, it will automatically rework the code of the game to be able to read the new compression routine? Haha, don't I wish it was that good :P. abcde doesn't make any changes to the game's code, it just uses a better insertion algorithm. The original data compression was sub-optimal; as one example, the game stores the string "'Welcome " as ['W][e][l][come][ ], with those 5 pieces taking up 10 + 5 + 5 + 10 + 5 = 35 bits. On the other hand, abcde is smarter and inserts the same string as ['][Welcome ] in 5 + 10 = 15 bits, for a savings of 20 bits. Little improvements like that that save 5 bits here, 10 bits there over the course of the entire script is how we end up with 121 bytes of reclaimed space. Title: Re: General Hacking Questions Post by: Chicken Knife on November 29, 2018, 08:09:01 am I thought I'd bring this thread back to life as I'm back into my projects consistently and have a few things I need help with. I'm nearly done with my Dragon Warrior 1 hack and my only issue left is that I want to fix some compromises I had to make porting over NPC sprite graphics from the famicom version due to left/right mirroring instructions in the DW1 for certain sprite tiles. Does anyone have any good reading material to recommend that will present in a way that does a good job explaining itself to a non coder that will help me understand "sprite pointers" (if I'm using the right term) and will help me understand how to either disable the LR mirroring instructions I’m struggling with or simply redirect the game to alternative tiles for that sprite? If I can "repoint" them effectively like I do with text, I will have a ton of extra space since I turned all the 4 direction sprites into 1 direction sprites. There are also other topics where I would like to do some reading and am having a hard time because I either can't find the resources or the resources I find are too technical: I encountered a multiplier formula in Dragon Warrior 3 where the game takes the base Japanese gold and experience numbers saved for monsters saved in the IS game but applies a .25% boost, rounded up, to those values for what you actually earn in battle. I want to locate and disable that multiplier, restoring the straight Japanese values in gameplay. I assume a debugger would help me here. If so, any good reading material (or better yet videos) you guys would recommend for effectively using the FCEUX debugger for something like this? Last, I've tried to play around with Atlas / Cartographer, ABCDE software in hopes of solving the DW2 script compression issue. The faq that comes with Atlas is too technical for me at this point and I haven't found much else out there. Any recommended reading? Thanks guys! 
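Circling back to abw's ['W][e][l][come][ ] versus ['][Welcome ] comparison above: picking the cheapest tokenization of a string is a classic dynamic-programming problem. The sketch below uses a toy cost table (quote pieces written as plain apostrophes, 5 bits for single-table pieces and 10 for double-table ones) rather than abcde's real dictionary or algorithm.

Code: [Select]
# Find the minimum-bit tokenization of a string given piece costs in bits.
COST = {
    "'W": 10, "e": 5, "l": 5, "come": 10, " ": 5,   # the pieces the game used
    "'": 5, "Welcome ": 10,                         # the cheaper alternative
}

def min_bits(text: str):
    INF = float("inf")
    best = [0] + [INF] * len(text)        # best[i] = cheapest cost of text[:i]
    choice = [None] * (len(text) + 1)     # piece chosen to end text[:i]
    for i in range(1, len(text) + 1):
        for piece, cost in COST.items():
            j = i - len(piece)
            if j >= 0 and text[j:i] == piece and best[j] + cost < best[i]:
                best[i], choice[i] = best[j] + cost, piece
    pieces, i = [], len(text)
    while i > 0:                          # walk backwards to recover the pieces
        pieces.append(choice[i])
        i -= len(choice[i])
    return best[len(text)], pieces[::-1]

print(min_bits("'Welcome "))              # (15, ["'", 'Welcome '])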
Title: Re: General Hacking Questions Post by: abw on November 29, 2018, 08:34:16 pm Does anyone have any good reading material to recommend that will present in a way that does a good job explaining itself to a non coder that will help me understand "sprite pointers" (if I'm using the right term) and will help me understand how to either disable the LR mirroring instructions I’m struggling with or simply redirect the game to alternative tiles for that sprite? Maybe I shouldn't comment since I don't usually do much graphics work, but when I do, I usually end up consulting Y0SHi's NES Documentation (http://www.romhacking.net/documents/120/), and although I'm pretty sure I originally had to read it a few times before it sunk in, it hasn't steered me wrong yet and it does have a couple of sections that talk about sprites. Last, I've tried to play around with Atlas / Cartographer, ABCDE software in hopes of solving the DW2 script compression issue. The faq that comes with Atlas is too technical for me at this point and I haven't found much else out there. Any recommended reading? Did you try abcde's readme or examples? Be honest - how bad are they really? I'm waaaay too deeply immersed in the source material to feel confident about my ability to pitch the documentation at a level that is useful without making other people barf, plus I implicitly assume that the end user is already familiar with Atlas/Cartographer :(. I might have some recommended reading for you, though (based on abcde v0.0.2; provide your own "Dragon Warrior II (U) [!].nes") ;D: Extraction: Quote from: Cartographer.txt #GAME NAME: Dragon Warrior II (U) [!].nes #BLOCK NAME: Main Script, Part 1 #TYPE: NORMAL #METHOD: POINTER_RELATIVE #POINTER ENDIAN: LITTLE #POINTER TABLE START:$B762 #POINTER TABLE STOP:   $B7BD #POINTER SIZE:$02 #POINTER SPACE:      $00 #STRINGS PER POINTER: 16 #AUTO JUMP START:$17FE7 #AUTO JUMP STOP:   $B7C2 #ATLAS PTRS: Yes #BASE POINTER:$C010 #TABLE:         dw2_script.tbl #END BLOCK #BLOCK NAME:      Main Script, Part 2 #TYPE:         NORMAL #METHOD:      POINTER_RELATIVE #POINTER ENDIAN:   LITTLE #POINTER TABLE START:   $B7BE #POINTER TABLE STOP:$B7C1 #POINTER SIZE:      $02 #POINTER SPACE:$00 #STRINGS PER POINTER:   16 #ATLAS PTRS:      Yes #BASE POINTER:      $10 #TABLE: dw2_script.tbl #COMMENTS: Yes #END BLOCK Quote from: dw2_script.tbl # NB: I didn't put much time into figuring out the control codes, so some of them are still unknown and some of them might be wrong. Caveat emptor! /%00000=[end]\n\n /%00001=.[end]\n\n /%00010=?[’ ][FD][FD][end]\n\n /%00011=[.’][end]\n\n %00100=[FF] %00101=y %00110=c %00111=o %01000=d %01001=e %01010=f %01011=g %01100=h %01101=i %01110=j %01111= %10000=l %10001=m %10010=n %10011=[line]\n %10100=[.’] %10101=[ ‘] %10110=r %10111=s %11000=t %11001=u %11010=a %11011=w %11100=C0 %11101=C1 %11110=C2 %11111=C3 %1110000000=A %1110000001=B %1110000010=Ca %1110000011=D %1110000100=E %1110000101=F %1110000110=G %1110000111=H %1110001000=I %1110001001=J %1110001010=King %1110001011=L %1110001100=Moonbrooke %1110001101=N %1110001110=O %1110001111=[item] %1110010000=The %1110010001=Rhone %1110010010=S %1110010011=; %1110010100=U %1110010101=” %1110010110=Water Flying Cl %1110010111=C %1110011000=Y %1110011001=Z %1110011010=x %1110011011=Village %1110011100=z %1110011101=[F9] %1110011110=‟ %1110011111=K %1110100000=v %1110100001=q %1110100010=[’ ][wait][line]\n %1110100011=R %1110100100=. %1110100101=[FD][FD] %1110100110=P %1110100111=b %1110101000=T %1110101001=! 
%1110101010=[sun] %1110101011=[star] %1110101100=[moon] %1110101101=W %1110101110=k %1110101111=p %1110110000=? %1110110001=, %1110110010=[monster] %1110110011=.... %1110110100=: %1110110101=[’ ] %1110110110=- %1110110111=[’ ] %1110111000=[spell] %1110111001=[letter] %1110111010=[no voice] %1110111011=[wait] %1110111100=M %1110111101=[name] %1110111110=[number] %1110111111=[FD] %1111000000=Thou hast %1111000001=hest %1111000010=Midenhall %1111000011=hou %1111000100= of %1111000101= is %1111000110= thou has %1111000111= and %1111001000=to th %1111001001= thee %1111001010=ast %1111001011= do %1111001100=hat %1111001101= shall %1111001110= was %1111001111=hou has %1111010000=d the %1111010001= has %1111010010=gon %1111010011=.[wait][line]\n %1111010100= have %1111010101=come to %1111010110=ing %1111010111= hast %1111011000=ost thou %1111011001=this %1111011010= of the %1111011011=Hargon %1111011100=in the %1111011101=thing %1111011110=he %1111011111= with %1111100000=reasure %1111100001=[ ‘]Hast %1111100010=Erdrick %1111100011=come %1111100100=ere is %1111100101=Welcome %1111100110=rince %1111100111= great %1111101000=arr %1111101001= for th %1111101010=piece[(s)] of gold %1111101011=[.’][wait][line]\n %1111101100=But %1111101101=here %1111101110=can %1111101111=ove %1111110000=hee %1111110001=not %1111110010=for %1111110011=one %1111110100= any %1111110101= to %1111110110=descendant %1111110111=Roge Fastfinger %1111111000=all %1111111001=thy %1111111010=[ ‘]W %1111111011=thank thee %1111111100= it %1111111101= tha %1111111110= thou %1111111111= the Usage (replace \path\to\abcde\abcde.pl as appropriate): Code: [Select] perl \path\to\abcde\abcde.pl -m bin2text -cm abcde::Cartographer "Dragon Warrior II (U) [!].nes" Cartographer.txt Cartographer_out -s Insertion: Quote from: Atlas.txt // Define, load, and activate a TABLE #VAR(Table, TABLE) #ADDTBL("dw2_script.tbl", Table) #ACTIVETBL(Table) // Jump to start of script #JMP($14010) #HDR($C010) // auto-commands for when DW2 does a mid-string bankswap and resets its read address: #AUTOCMD($17FE7, #HDR($10)) #AUTOCMD($17FE7, #JMP($B7C2,$BE0F)) // the rest of Cartographer_out.txt goes here Quote from: insert.bat @copy /Y "Dragon Warrior II (U) [!].nes" "Dragon Warrior II (U) [new].nes" > nul perl \path\to\abcde\abcde.pl -m text2bin -cm abcde::Atlas "Dragon Warrior II (U) [new].nes" Atlas.txt pause Title: Re: General NES Hacking Questions Post by: Chicken Knife on November 30, 2018, 02:54:28 am @ abw Wow! Talk about a silver platter  :beer: I've done all the steps you've outlined but I'm getting this message after inputting: (from within abcde.pl's directory) perl abcde.pl - bin2text -cm abcde::Cartographer "Dragon Warrior II - Edit.nes" Cartographer.txt Cartographer_out -s Message after inputting: error UTF-8 "\x92" does not map to Unicode at C:/Perl164/lib/Encode.pm line 228, <TABLE> line 3 when reading C:\Rom Editing\dw2_script.tbl line 3 I don't think me using the altered name of my hacked rom file would matter. Let me know if it does. As far as Perl, I downloaded the software today in order to get this going. ActivePerl 5.26.1.2601 MS Win32-x64-404865 is the version. PS As far as Yoshi's NES doc, I think it's written far more for someone like you than someone like me. The density of it loses me in the same way as most of what I try to read on the NES Dev site. Truth be told, your abcde faq is much easier to follow, though I'm far from understanding all of it and it definitely does assume I understand Atlas / Cartographer. 
As far as the Yoshi faq, I'll have to keep coming back to this and whatever else I find until I figure out how to do the graphical work I need to. Title: Re: General NES Hacking Questions Post by: abw on November 30, 2018, 08:36:29 am @ abw Wow! Talk about a silver platter  :beer: Heh, I had it handy from earlier in the thread. I used that stuff to make the King of Moonbrooke say "Oh noes, not the flying purple people eaters!" during the intro scene in order to test inserts :P. error UTF-8 "\x92" does not map to Unicode at C:/Perl164/lib/Encode.pm line 228, <TABLE> line 3 when reading C:\Rom Editing\dw2_script.tbl line 3 Ah, sorry, I forgot to mention that you'll need to save the files (particularly the table file) encoded as UTF-8. Any modern text editor should give you that option when you save a file; even Notepad can save as UTF-8! As far as Yoshi's NES doc, I think it's written far more for someone like you than someone like me. The density of it loses me in the same way as most of what I try to read on the NES Dev site. Yeah, there's definitely a lot to take in, but knowing what's going on at the hardware level, what the different sections of PPU RAM control, and what all those writes to $20xx are doing can be pretty helpful when you're trying to track down some graphics logic. Truth be told, your abcde faq is much easier to follow Huzzah, mission partially accomplished! Title: Re: General NES Hacking Questions Post by: Chicken Knife on November 30, 2018, 07:33:03 pm @ abw Worked like a charm. :woot!: I assume that a similar approach to how I edit text in hex editors would serve me well. I plan on working within the length of the existing strings, using blank entries at the end of lines of text if I happen to shorten in order to keep the length exactly the same. The thing that concerns me is the presence of so many control codes in DW2. As I go through the extraction document, I see the presence of plenty of those words in the script, but I don't see any indication that those words are produced by a control code in the document. I can keep Roge Fastfinger in mind easily enough, but with a ton of other common control code words like he, with, can, one, any etc. there is no way I'm going to be able to keep track of them all. Also, how do I know that the game consistently used control codes for those words instead of using them sporadically? It seems like it will be inevitable that I'll end up with a different data size and a considerable headache. Perhaps I could use the find command to highlight every instance of them in the text before I start editing in order to be wary of carefully maintaining string length. Any advice would be appreciated. Also, going back again to Roge Fastfinger, I will be reverting his name to the Japanese Lagos. Is there any simple way to edit the control code words so I could swap out the name and use the same code? If not, I suppose I'd have to seriously abbreviate those lines to compensate. Also, when reinsertion time arrives, does your system automatically note every control code word and insert them back into the game as the single byte control code instead of individual letter data? I'm curious. Last question: I've heard on occasion that this kind of software automatically readjusts the text pointers and I could therefore enlarge some strings of text as long as the total amount of data is the same upon insertion. That seems too good to be true. Could it be? Let me know your thoughts. I'm sure there is a lot I'm conceptually missing here. 
Title: Re: General NES Hacking Questions Post by: abw on December 01, 2018, 12:13:49 am Worked like a charm. :woot!: Whoo-hoo! I assume that a similar approach to how I edit text in hex editors would serve me well. I plan on working within the length of the existing strings, using blank entries at the end of lines of text if I happen to shorten in order to keep the length exactly the same. It seems like it will be inevitable that I'll end up with a different data size and a considerable headache. *shudders* Nope, nope, and... wait for it... nope :P. The fastest way is probably just to try it and see, but basically when you use a (decent) script insertion utility, you don't have to worry too much about things like the binary it inserts or updating pointers, since the utility takes care of all of that for you. Like I mentioned earlier, you're still constrained by the total available space and text engine and stuff like that, but as long as you don't try to insert impossible (combinations of) characters or too much text, you can do more-or-less whatever you want. Line lengths will change; that's okay and you don't need to care about it. Some text will get encoded as single characters, some as multiple characters; that's okay and you don't need to care about it. String addresses will change and pointers to those strings will get updated; that's okay and you don't need to care about it. Last question: I've heard on occasion that this kind of software automatically readjusts the text pointers and I could therefore enlarge some strings of text as long as the total amount of data is the same upon insertion. That seems too good to be true. Could it be? Automatically recalculating pointer values is just one of the many benefits of using a script insertion utility over hex editing :). Also, going back again to Roge Fastfinger, I will be reverting his name to the Japanese Lagos. Is there any simple way to edit the control code words so I could swap out the name and use the same code? Yup. It's not quite as simple for DW2 as it would be for a game with a less ornery text engine, but it's still not too hard. The dictionary lives at ROM 0xB44B-0xB686, with each nybble (1 nybble = half a byte) of 0xB44B-0xB49A giving the length of the corresponding dictionary entry in 0xB49B-0xB686. So you could change "Roge Fastfinger" to "Lagos" at 0xB655, shift the rest of 0xB664-0xB686 up to just after the end of "Lagos", and then change the length of the "Roge Fastfinger" dictionary entry from 15 to 5 at 0xB496 (i.e. 0xAF -> 0xA5), leaving you with 10 bytes of unused space at 0xB67D-0xB686. Just make sure to update your table file to match whatever changes you make to the game's text encoding, or you're going to get a nasty surprise when you try extracting or inserting text! Also, when reinsertion time arrives, does your system automatically note every control code word and insert them back into the game as the single byte control code instead of individual letter data? I'm curious. Based on the table file(s) you provide, abcde is aware of all the possible ways of translating your text into the binary that the game needs, and it will convert your text into the shortest possible binary that represents your text. How? Magic :angel:. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 02, 2018, 04:10:31 pm @abw Ok, I worked furiously throughout the weekend on my script. Having just completed it, I ran the insertion routine, loaded up the game, and found the text to be totally mangled. 
Everyone is saying something different than what they should be saying. I don't think my data overran the alotted space. In general a direct translation of the famicom is shorter than the US version. I assume I would have gotten some kind of error message if it did. The other possibility in my mind is that I was supposed to totally sort out the control codes prior reworking the script, which I did not do. Is that what you would suspect is the issue or do you have another idea? I have a frightening suspicion that if control codes are the issue, I would have saved myself tremendous work prior to extraction and reworking the script. Also, there were some instructions in the Atlas data that you quoted me where I wasn't sure if they were written to me or the program. For example: // auto-commands for when DW2 does a mid-string bankswap and resets its read address: PS I don't know if this helps, but all of the dialogue from everyone seems to be various battle commands *EDIT If it helps at all, this is the Atlas file where I tried removing what looked like your instructions from me and added my game script underneath. Take a look. https://www.dropbox.com/s/t06zcf8qmmfhsno/Atlas.txt?dl=0 Title: Re: General NES Hacking Questions Post by: abw on December 02, 2018, 10:52:18 pm Ha, that's me skipping over important little details again :-[ (in my defence, they are documented). In my extract script, I set #COMMENTS: Yes, which makes every line of the script be output as a comment and thus ignored during insertion. So, since all the text is a comment, the only thing that actually changed would have been the pointers, and every pointer in the first block of pointers would point to the same first string starting at 0x14010, which happens to contain battle text. The second block of pointers would also point to the same RAM address ($8000), but with the wrong ROM bank visible in RAM, so I imagine any attempt to display one of those strings would end badly. If you're only making a few changes to the script, you might prefer setting #COMMENTS: No instead and editing the text that way, or reformat the script dump using your text editor of choice (when translating, I like keeping the original text for reference and adding my new text just below); a simple regular expression search-and-replace operation or two should do the trick, or in the worst case you could manually delete the // before each line of script. Also, there were some instructions in the Atlas data that you quoted me where I wasn't sure if they were written to me or the program. For example: // auto-commands for when DW2 does a mid-string bankswap and resets its read address: Any line in the extract or insert command files that starts with // is treated as a comment, so the ones I added are just notes to explain what's going on. In this case, DW2 has some code that says "when the current read address reaches $BFD7, swap in a different bank and keep reading from$B7B2", so the #AUTOCMDs are there to handle that during insertion. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 02, 2018, 11:33:46 pm @abw As far as leaving out that bit, you are my patron saint of hacking. No grievance shall ever be held against you.  :laugh: Ok. So if comments would have been set to No, then all the lines of text would have inserted as intended I assume. 
If I understand correctly, leaving comments set to Yes would be convenient because I could write the legitimate lines of text underneath them without the // and insert the data back in the rom effectively even with the old script appearing as comments. So in my case, I took the output document and totally changed 98 percent of the text based on various resources to reflect a literal but fluid version of the Japanese text in contemporary English. I haven't slept much lately--and my wife may want to divorce me but that's neither here nor there. :D So what would you suggest I do now? I often don't mind doing things the slow and painful way. Lifelong RPG gamer after all  :laugh: I'm thinking to go through the document and delete every instance of the characters // and leave whatever text comes after them. Would that do the job for reinsertion or do I have to add anything in their place? There is no way I'm starting over with a new extraction. Somehow or another, this document has to get formatted correctly. December 04, 2018, 02:07:08 am - (Auto Merged - Double Posts are not allowed before 7 days.) Ok. That was a question I could really answer myself through a little experimentation. I ran another extraction and just confirmed which // characters to remove. Having done so, I reinserted and about 3/4 of my text was inserted correctly! I'll consider that a small victory haha. The other 1/4 text is either the games original text or bugged. The error message I got in command prompt when I did the insertion was: unable to tokenize; best attempt failed at input position 513 at C:\Rom Editing/abcde/Table/Table.pm Line 421 <COMMAND_FILE> line 1707. in text string starting at Atlas.txt line 1665. I immediately thought this was an issue with my insert text. I went to that line (with word wrap turned off) and found there is nothing unusual at all about it. Line 1665 reads: [ ‘]This is a sewing shop[.’][line] Title: Re: General NES Hacking Questions Post by: abw on December 04, 2018, 08:03:38 pm As far as leaving out that bit, you are my patron saint of hacking. No grievance shall ever be held against you.  :laugh: Heh, be careful what you promise >:D I haven't slept much lately--and my wife may want to divorce me but that's neither here nor there. :D I'm not going to admit to knowing what that feels like :D. I took a closer look at this and noticed a couple of things: - The dictionary does not include uppercase Q, V, or X as displayable characters, so you can't use those anywhere in your script without changing the dictionary. That unreadable mess of garbage just before the final error message abcde gave is supposed to show you how far along abcde got in the problem string before it couldn't go any farther, and as it happens 514 characters into the string starting at line 1665 is a Q. (yeah, the error messages suck) - Some of the dictionary entries display different visually identical tiles, but I made them the same in my table file. In my translation of DW1, I originally made the mistake of assuming that just because the graphics were identical, it didn't matter which of the different tile IDs I used; this led to a moment of intense dismay when I realized that I had managed to accidentally disable the "speech" text printing sound effect! 
- You did indeed make a lot of changes - replacing all the "thee"s and "thou"s and changing some proper nouns meant that a lot of the original dictionary entries went unused during insertion, which resulted in your new script not compressing small enough to fit in the available space. A few ad hoc tweaks to the dictionary was enough to sort that out and leave you with 218 bytes to spare; if you end up needing more room, updating the dictionary based on a full frequency analysis should give you much better compression results than me just eyeballing things. Here's a copy of the updated insert script (https://drive.google.com/open?id=1rrX177cE49fE2R0CYk9CXhYgGJqmNgQZ) and table file (https://drive.google.com/open?id=1MnB--MNEd4BG_IOxt4qZcwq3y6kigmMr) I used. I took the liberty of merging the original text into your new text as comments and fixing up a couple of typos that were breaking insertion; spell checking the new script is your responsibility :P. For the table file changes, you'll need to make the corresponding changes to the dictionary like I described a couple of posts ago (http://www.romhacking.net/forum/index.php?topic=27053.msg366909#msg366909). Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 05, 2018, 06:29:19 am @abw, you didn't need to do all this work for me! I'm extremely grateful but I really don't want to be a burden here (Although I imagine watching me flop around like a fish out of water can be equally burdensome) I think you've given me everything I need to do this insertion and further improvements to my script--except for one thing that I need clarification for. In order to update the rom's dictionary with the new words and nybble lengths as you instructed above, I need you to clarify how I actually locate where to plug in the nybble that corresponds to each dictionary entry. Either I missed it or you didn't provide the detail as to how you knew 0x0B496 corresponded to the entry for Roge Fastfinger / Lagos. I'm not sure if this uses a counting system like the item/monster/spell lists. Hopefully not since that would give me a big headache as the entries here seem all mashed up together. Title: Re: General NES Hacking Questions Post by: abw on December 05, 2018, 07:35:57 pm @abw, you didn't need to do all this work for me! I'm extremely grateful but I really don't want to be a burden here (Although I imagine watching me flop around like a fish out of water can be equally burdensome) If it makes you feel any better, only a little bit of that work was purely for you ;). I'm slowly puttering away at my own translation whenever the mood strikes me, so a lot of the issues you're encountering now are issues I would no doubt encounter myself at a later date. Might as well deal with them now so that we can both benefit (and judging from the read count on this topic, I suspect it's more than just you and I that are interested in this). I think you've given me everything I need to do this insertion and further improvements to my script--except for one thing that I need clarification for. The lengths of the dictionary entries are stored in nybbles at 0xB44B-0xB49A and the values are stored in bytes at 0xB49B-0xB686; the table file I provided lists the dictionary entries in the same order they appear in ROM, and the table file you were using to edit the item/monster/spell lists will also let you see the dictionary values correctly. An example from partway through the dictionary might be helpful to see what's going on: Code: [Select] ... 
0x00B45B|$02:$B44B:11                            ; length of "A", length of "B" 0x00B45C|$02:$B44C:21                            ; length of "Ca", length of "D" 0x00B45D|$02:$B44D:11                            ; length of "E", length of "F" 0x00B45E|$02:$B44E:11                            ; length of "G", length of "H" 0x00B45F|$02:$B44F:11                            ; length of "I", length of "J" 0x00B460|$02:$B450:41                            ; length of "King", length of "L" 0x00B461|$02:$B451:A1                            ; length of "Moonbrooke", length of "N" ... 0x00B4C1|$02:$B4B1:24                            ; A 0x00B4C2|$02:$B4B2:25                            ; B 0x00B4C3|$02:$B4B3:26 0A                         ; Ca 0x00B4C5|$02:$B4B5:27                            ; D 0x00B4C6|$02:$B4B6:28                            ; E 0x00B4C7|$02:$B4B7:29                            ; F 0x00B4C8|$02:$B4B8:2A                            ; G 0x00B4C9|$02:$B4B9:2B                            ; H 0x00B4CA|$02:$B4BA:2C                            ; I 0x00B4CB|$02:$B4BB:2D                            ; J 0x00B4CC|$02:$B4BC:2E 12 17 10                   ; King 0x00B4D0|$02:$B4C0:2F                            ; L 0x00B4D1|$02:$B4C1:30 18 18 17 0B 1B 18 18 14 0E ; Moonbrooke 0x00B4DB|$02:$B4CB:31                            ; N ... So if you wanted to, say, change the dictionary entry for King to say Queen instead, you'd change the length stored at 0xB460 from 41 to 51, change King to Queen starting at 0xB4CC, and shift everything from 0xB4D0 down by one byte. The dictionary has a fixed length and is followed immediately by code though, so in a case like this where you added a byte, you would need to remove a byte from somewhere else in the dictionary (pick "Water Flying Cl", it's only used 4 times in the entire script!). Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 05, 2018, 08:04:54 pm @abw knowing that you tend to respond around this time I was in a big rush to message that I actually figured this out myself (well not really but still), finding that it was indeed a similar counting mechanism and making a little chart for myself just to make sure I understood it. Exhibit A: the little chart I was working on. hahaha B490   piece(s) of gold   ??? B491   But[ ]         here B492   can[ ]         ove B493   hee         not B494   for         one B495   [ ]any         [ ]to[ ] B496   Descendant       Lagos B497    all         thy B498   ?W         thank thee B499   [ ]it         [ ]thou B49A   [  ]the.      4 December 05, 2018, 08:07:19 pm - (Auto Merged - Double Posts are not allowed before 7 days.) Now that I get the mechanics of this, I'm going to optimize the dictionary entries to match up to my translation trends as much as possible, which should allow me to expand text a little more as I go through the revision process should I desire. I'm interested in what you are planning translation wise--more Latin for DW2? Also, now that we are discussing your translation, I'm curious what your textual basis was: the original US English translation or the Japanese? Title: Re: General NES Hacking Questions Post by: abw on December 07, 2018, 11:08:02 pm Exhibit A: the little chart I was working on. hahaha Haha, I see that's a nice little chart you have there :thumbsup:. I'm interested in what you are planning translation wise--more Latin for DW2? 
Also, now that we are discussing your translation, I'm curious what your textual basis was: the original US English translation or the Japanese? Yup. It's a dirty job, but somebody's got to do it :P. I generally use the English version of whatever I'm translating as my base, but will consult other English versions or other languages if the need arises; this happens most often for made-up enemy/item names, but sometimes I come across a line or two of dialogue that I like to clarify because it's a little too vague or allows for multiple valid interpretations. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 08, 2018, 11:13:07 pm Ok, I have a report that is pretty darned good. While I was initially having some problems achieving a successful insertion with the updated Atlas.txt and table file you provided, I was able to narrow down the issue to one small mistake I made as I heavily edited the dictionary where a character length was off by one. Once I fixed that, I was able to do an insert that is 99% bug free (at least according to the appx 30% of game text that I've tested so far.) Here is the bug that's jumping out at me. The King of Cannock/Samaltria has some bugged text prior to his normal text. See the two pics below. The text seems to be from Hargon's Temple after you use the Charm/Eye of Rubiss to dispel the illusion https://www.dropbox.com/s/txzrjqisvzwumuc/Dragon%20Warrior%20II%20-%20Edit_002.png?dl=0 https://www.dropbox.com/s/syx9sowi42u3j03/Dragon%20Warrior%20II%20-%20Edit_003.png?dl=0 After those lines, again he returns to his normal text, allows you to save, and remarks about his son. Very peculiar and I hope you have an idea how to fix it because I have no idea. Let's see if I find anything else like this, but I'm relieved that everything else has pointed to the right text so far. Here are links to my relevant files if you want to look at them regarding this issue. FYI my Atlas.txt script is significantly revised since I last sent it. https://www.dropbox.com/s/5qo09huap2a5y2t/dw2_script.tbl?dl=0 https://www.dropbox.com/s/yn3qj00b536dxqt/Atlas.txt?dl=0 https://www.dropbox.com/s/fns0v7z1o6i8nxm/dw2_delocalized_052.ips?dl=0 One other thing. Regarding your Latin Translation project, are you planning to include any kind of uncensoring element? If you were I would be most grateful to enlist your efforts on that front. As you can see if you look in my patch I restored the Japanese church crosses and sprite for the priest. Unfortunately however, the priest is assigned the color palette of Princess Moonbrooke instead of what should be the Prince Midenhall/Laurasia palette. And then there's the matter of replacing the ghosts with original coffins but let me know if you're even interested in this graphical stuff. I must say that your long deceased Roman audience probably wouldn't have approved much of censorship--though the Latin speaking clergy would have demanded it I'm sure.  :D *** Update After my newest round of updates, King Samaltria's Kee Kee text went away. :o This is after it stuck around through the last 3 updates or so. I don't get it! Seems more like a bug in the game itself than a bug from the insertion Title: Re: General NES Hacking Questions Post by: abw on December 09, 2018, 01:58:13 pm The King of Cannock/Samaltria has some bugged text prior to his normal text. See the two pics below. The text seems to be from Hargon's Temple after you use the Charm/Eye of Rubiss to dispel the illusion [...] 
After my newest round of updates, King Samaltria's Kee Kee text went away. :o This is after it stuck around through the last 3 updates or so. I don't get it! Seems more like a bug in the game itself than a bug from the insertion This turns out to be a little bit of column A, a little bit of column B. On the insert side, the game has a hardcoded value that tells it which pointer within the pointer table at ROM 0xB762-0xB7C1 starts reading from ROM bank 2 instead of ROM bank 5, but the insert script didn't update that value, so the pointer at ROM 0xB7BE started reading text from the wrong ROM bank, which threw off the end token counting before the script reached RAM $BFD7 and resulted in the wrong string being displayed. This is easily fixable by adding a COUNTER variable and an AUTOCMD for writing the counter like in the Dragon Quest IV example that ships with abcde, except Dragon Warrior II's script is only spread across 2 ROM banks instead of 6, so it's easier for II than IV. Code: [Select] // add this near the top of the insert script: #VAR(pointerNum, COUNTER) // create a COUNTER variable named pointerNum #CREATECTR(pointerNum, 8, 0) // pointerNum is an 8-bit value initialized to 0 #AUTOCMD($17FE7, #WLB(pointerNum, $3FA90)) // update the code that controls which pointer starts the next bank // and then after every #W16 line in the insert script, add: #INC(pointerNum, 1) On the game side, there is a bug where the game takes the pointer to the start of the desired group of 16 strings and reads the first 2 bytes from the pointer's target *before* it runs through the code for checking whether the current string address has reached RAM$BFD7, so if you're unlucky enough to have a pointer point to RAM $BFD6 or$BFD7 like happened with your linked insert script, the text for that pointer gets seriously screwed up. There's currently no way for abcde to handle that situation for you automatically, so unfortunately you're just going to have to keep an eye on the pointer values, and if you see $BFD6 or$BFD7 come up, you can knock the #AUTOCMD($17FE7, *) addresses back to$17FE5 or $17FE6 respectively to compensate. It means you lose 1 or 2 bytes of script space, but that's probably easier to deal with than making the ASM changes required to fix the bug in the original game. Both of these issues only show up depending on where the dividing line between ROM banks 5 and 2 falls in your new script compared to the original script, which explains why they would come and go as you continue updating and re-inserting your script. One other thing. Regarding your Latin Translation project, are you planning to include any kind of uncensoring element? If you were I would be most grateful to enlist your efforts on that front. As you can see if you look in my patch I restored the Japanese church crosses and sprite for the priest. Unfortunately however, the priest is assigned the color palette of Princess Moonbrooke instead of what should be the Prince Midenhall/Laurasia palette. And then there's the matter of replacing the ghosts with original coffins but let me know if you're even interested in this graphical stuff. I must say that your long deceased Roman audience probably wouldn't have approved much of censorship--though the Latin speaking clergy would have demanded it I'm sure. :D I generally try to stick pretty close to the source material for various reasons (e.g. 
nostalgia, insufficient fluency in Japanese, etc.), so as a rule of thumb, if my source was censored, so is my translation; if my source was not censored, neither is my translation. I can do graphics hacking if I have to, but while I appreciate the results of other people's graphics hacks, making my own just doesn't have the same appeal as translating does. As for the ancient Romans, well, they were no strangers to censorship ("censor" itself is a Latin word), but I think a bigger problem there would be the lack of hardware and compatible power sources to play the translation on :P. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 09, 2018, 02:31:57 pm Thank you for looking at this! While inserting the lines you quoted in my Atlas.txt is easy enough, if I am to understand correctly, even with the COUNTER variable and AUTOCMD added, there is still a possibility of seeing these kinds of issues pop up. This leads me to the second fix: if you see $BFD6 or $BFD7 come up, you can knock the #AUTOCMD($17FE7, *) addresses back to $17FE5 or $17FE6 respectively to compensate. It means you lose 1 or 2 bytes of script space, but that's probably easier to deal with than making the ASM changes required to fix the bug in the original game. I don't exactly understand the above instructions. Could you explain to me a little more clearly what you mean by "knock the #AUTOCMD($17FE7, *) addresses back to $17FE5 or $17FE6"? First, how do I see when $BFD6 or $BFD7 come up, and then how do I actually knock them back to the other addresses? And finally, my attempt to rally you to the cause of graphics work went exactly as expected. I can't be blamed for trying though! Title: Re: General NES Hacking Questions Post by: Choppasmith on December 09, 2018, 03:24:59 pm Hey guys, with me finishing DW1, it's time for me to throw my hat into the text editing ring as well. I've been following this thread and I should be all caught up as far as having all the stuff I need. Big thanks to you, abw, for your help :) abw, I noticed your table says you haven't figured out those control codes, but there's one in particular that's bugging me. In the python script posted on the first page there's a standalone byte, F2, which is a special "s" byte. It's used in the "piece<s> of gold" dictionary entry. I would take a good guess that this is the same as the EF byte in DW1 that adds or omits an S based on the number value used in the string. I'd hope to use this to change the battle message "[name][’ ]s HP is reduced by [number]." into "[name] takes [number] point(s) of damage". Is there a way I can update the table to use this standalone byte? Title: Re: General NES Hacking Questions Post by: abw on December 09, 2018, 04:44:32 pm I don't exactly understand the above instructions. Could you explain to me a little more clearly what you mean by "knock the #AUTOCMD($17FE7, *) addresses back to $17FE5 or $17FE6"? First, how do I see when $BFD6 or $BFD7 come up, and then how do I actually knock them back to the other addresses? Oh, sorry, I meant you'll need to check the values of the pointers in the pointer table at ROM 0xB762-0xB7C1 (based on your current script size, 0xB7BE is the most likely culprit) following an insert to see if any of the values are $BFD6 or $BFD7 (a.k.a. D6 BF or D7 BF in little endian), and if so, then adjust the AUTOCMD commands in the insert script to fire when the insert point reaches $17FE5 or $17FE6 instead of $17FE7.
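If checking those pointer bytes by hand after every insert gets tedious, a small script can do it; here is a minimal sketch (Python), assuming a headered ROM and the 0xB762-0xB7C1 table range quoted above — the filename is just a placeholder. Code: [Select]
# Minimal sketch: scan DW2's string-group pointer table for the two "bad"
# little-endian pointer values described above ($BFD6 / $BFD7, i.e. D6 BF / D7 BF).
# Assumes a headered ROM and the file offsets quoted in the post above;
# the filename is a placeholder.

ROM_PATH = "Dragon Warrior II - Edit.nes"   # placeholder name
TABLE_START, TABLE_STOP = 0xB762, 0xB7C2    # pointer table range (stop is exclusive)
BAD_VALUES = {0xBFD6, 0xBFD7}

with open(ROM_PATH, "rb") as f:
    rom = f.read()

for offset in range(TABLE_START, TABLE_STOP, 2):
    value = rom[offset] | (rom[offset + 1] << 8)    # little-endian 16-bit pointer
    flag = "  <-- problem pointer" if value in BAD_VALUES else ""
    print(f"0x{offset:05X}: ${value:04X}{flag}")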
Basically, we need to make sure the game never tries to read script data at or after RAM$BFD7 / ROM 0x17FE7, and since the game unconditionally reads the first two bytes of the string based on the pointer value, pointer values of $BFD6 or$BFD7 are bad. And finally, my attempt to rally you to the cause of graphics work went exactly as expected. I can't be blamed for trying though! Based on your work from DW1, I think you'll be able to manage the graphics just fine without me :P. Hey guys, with me finishing DW1, it's time for me to throw my hat into the text editing ring as well. I've been following this thread and I should be all caught up as far as having all the stuff I need. Big thanks to you abw for your help :) Welcome to the party! (There's music for that in this game!) I figured you'd show up here sooner or later ;). abw, I noticed your table says you haven't figured out those control codes, but there's one in particular that's bugging me. I also assume the F2 code in DW2 works the same way as EF in DW1, so if you wanted to re-jig the dictionary to make that code available separately as opposed to being usable only as part of the "piece[(s)] of gold" entry, feel free. The process I described a couple of posts ago (https://www.romhacking.net/forum/index.php?topic=27053.msg367159#msg367159) should work just as well for this dictionary update as for "King" -> "Queen". The codes I haven't bothered to check out are the F9, FD, and FF codes. Based on its usage in the script, F9 is probably some sort of item ID, though it'll take some research to figure out if and how it differs from F7. Some light experimentation suggested that FD might be useless, and FF looks like it might be some kind of conditional end token, but again, I haven't really looked at them to find out, so those are just guesses. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 09, 2018, 05:06:50 pm Based on your work from DW1, I think you'll be able to manage the graphics just fine without me :P. Hah. I'm glad my DW1 work *looks* impressive but truthfully all of that was my very first foray into hacking and more a result of elbow grease than actual skill. Most of it was simply spamming tiles from the famicom version all over the tile grid with a ton of duplication. The in game engine is still having the sprites point left, right, up, down etc but all of those directional tiles were simply replaced with the front facing famicom tiles so that they would display appropriately. The issues were when the US game would use mirrored images of tiles and that would put me in a spot where the famicom sprites would inevitably get screwed up. I would have to do little tricks to make the sprites more symmetrical like change the old wizard guy's head so his cap no longer leans to one side, or remove the guards spear because his body had to by symmetrical, all to work around the fact I couldn't disable the mirroring or point the code to different tiles. Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 23, 2018, 04:19:50 pm @abw Since I'm pretty much done with my DW2 retranslation (perpetual rephrasings aside), I thought I would go back and give the first Dragon Warrior a bit of the old in out (or should I say "out in") with abcde. I figured I would let you and choppasmith have some time to bring your DW2 projects forward before I start assaulting you guys with requests for help on running extract and insert routines on DW3. 
I got the table file that Choppasmith used, (byte format as opposed to the bit format one you sent me for DW2--I assume they both will work) and was pleasantly reminded to save it in UTF-8 per your readme. For the Cartographer.txt, I got the text pointer data from data crystal. I removed the auto jump instructions as I figured that would be DW2 specific. Gave it a shot and I got a ton of the same error message in command prompt: substr outside of string at C:\Rom Editing/abcde/Table/Table.pm line 234. It did spit out a very limited cartographer out file which I'll post a bit of immediately below. No in game text was captured in the output. //POINTER #16 @ $8032 - STRING #16 @$16FFE #W16($8032) // current address:$16FFE //POINTER #17 @ $8034 - STRING #17 @$1769B #W16($8034) // current address:$1769B //POINTER #18 @ $8036 - STRING #18 @$17A75 #W16($8036) // current address:$17A75 This is what my Cartographer.text looks like below: (FYI I don't really need the comments for existing script turned on. I'm really only refining and expanding what I already wrote while I consult outside of game resources for the Japanese text.) #GAME NAME:      Dragon Warrior I - Edit.nes #BLOCK NAME:      Main Script #TYPE:         NORMAL #METHOD:      POINTER_RELATIVE #POINTER ENDIAN:   LITTLE #POINTER TABLE START:   $8012 #POINTER TABLE STOP:$8037 #POINTER SIZE:      $02 #POINTER SPACE:$00 #STRINGS PER POINTER:   16 #ATLAS PTRS:      Yes #BASE POINTER:      $C010 #TABLE: dw1_script.tbl #COMMENTS: No #END BLOCK I'm not sure if anything else needed to be changed. I figure pointer size would be the same. I'm not sure if DW1 also used 16 strings per pointer. Let me know if you think something is wrong here or if something needs to be tweaked with the table file for whatever reason. (attached below) https://www.dropbox.com/s/h2uxum9jurl0jcb/dw1_script.tbl?dl=0 Title: Re: General NES Hacking Questions Post by: abw on December 30, 2018, 04:12:13 pm Try it again with "#BASE POINTER:$10" instead - the issue here is that a pointer value of e.g. $8028 + a base value of$C010 = $14038, but the entire ROM (including header) is only$14010 bytes, so we end up trying to read beyond the end of the file, which doesn't work so well. The other thing that might trip you up a bit is that the final pointer only has 10 strings instead of 16, so for a cleaner dump you'll want to split the main script in to two blocks: Code: [Select] #BLOCK NAME: Main Script #TYPE: NORMAL #METHOD: POINTER_RELATIVE #POINTER ENDIAN: LITTLE #POINTER TABLE START: $8012 #POINTER TABLE STOP:$8035 #POINTER SIZE: $02 #POINTER SPACE:$00 #STRINGS PER POINTER: 16 #ATLAS PTRS: Yes #BASE POINTER: $10 #TABLE: dw1_script.tbl #COMMENTS: No #END BLOCK // only 10 strings for the final pointer #BLOCK NAME: Main Script #TYPE: NORMAL #METHOD: POINTER_RELATIVE #POINTER ENDIAN: LITTLE #POINTER TABLE START:$8036 #POINTER TABLE STOP: $8037 #POINTER SIZE:$02 #POINTER SPACE: $00 #STRINGS PER POINTER: 10 #ATLAS PTRS: Yes #BASE POINTER:$10 #TABLE: dw1_script.tbl #END BLOCK Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 30, 2018, 11:08:16 pm Try it again with "#BASE POINTER: $10" instead - the issue here is that a pointer value of e.g.$8028 + a base value of $C010 =$14038, but the entire ROM (including header) is only $14010 bytes, so we end up trying to read beyond the end of the file, which doesn't work so well. 
The other thing that might trip you up a bit is that the final pointer only has 10 strings instead of 16, so for a cleaner dump you'll want to split the main script in to two blocks: Code: [Select] #BLOCK NAME: Main Script #TYPE: NORMAL #METHOD: POINTER_RELATIVE #POINTER ENDIAN: LITTLE #POINTER TABLE START:$8012 #POINTER TABLE STOP: $8035 #POINTER SIZE:$02 #POINTER SPACE: $00 #STRINGS PER POINTER: 16 #ATLAS PTRS: Yes #BASE POINTER:$10 #TABLE: dw1_script.tbl #END BLOCK // only 10 strings for the final pointer #BLOCK NAME: Main Script #TYPE: NORMAL #METHOD: POINTER_RELATIVE #POINTER ENDIAN: LITTLE #POINTER TABLE START: $8036 #POINTER TABLE STOP:$8037 #POINTER SIZE: $02 #POINTER SPACE:$00 #STRINGS PER POINTER: 10 #ATLAS PTRS: Yes #BASE POINTER: $10 #TABLE: dw1_script.tbl #COMMENTS: No #END BLOCK @abw, thank you as always for that and the explanation. Unfortunately however the results were not successful. The output file contains a little bit of the script in a mashed together format with no delineation as I had with DW2. After the small section of script there is this massive block of information in byte format. Here's a copy if you want to take a glance. I'm scratching my head as to what would be causing this result. https://www.dropbox.com/s/4tvfmtab1upy3vu/Cartographer_out.txt?dl=0 my command line instruction for output is basically the same as the one you recommended for DW2. Should any of that change in the case of this game? perl abcde.pl -m bin2text -cm abcde::Cartographer "Dragon Warrior I - Edit.nes" Cartographer.txt Cartographer_out -s Title: Re: General NES Hacking Questions Post by: abw on December 31, 2018, 01:06:04 am Ah, yes, that table file doesn't mark any of the table entries as end tokens, and with no end tokens there's nothing to say when to stop dumping, so each string keeps going until the end of the ROM :P. It also doesn't contain any newline markers, so even after fixing the end tokens you'd still end up with all 16 of each pointer's strings on one line, which didn't feel right to me. Here's the table file I've been using; feel free to adjust the \n linebreaks to match your desired format. Code: [Select] 00=0 01=1 02=2 03=3 04=4 05=5 06=6 07=7 08=8 09=9 0A=a 0B=b 0C=c 0D=d 0E=e 0F=f 10=g 11=h 12=i 13=j 14=k 15=l 16=m 17=n 18=o 19=p 1A=q 1B=r 1C=s 1D=t 1E=u 1F=v 20=w 21=x 22=y 23=z 24=A 25=B 26=C 27=D 28=E 29=F 2A=G 2B=H 2C=I 2D=J 2E=K 2F=L 30=M 31=N 32=O 33=P 34=Q 35=R 36=S 37=T 38=U 39=V 3A=W 3B=X 3C=Y 3D=Z 3E=‟ 3F=” # 40 and 53 are visually identical, but 40 is used as a right single quotation mark while 53 is used as an apostrophe 40=’ 41=* 42=[triangle-r] 43=[triangle-d] 44=: 45=[..] 47=. 48=, 49=- 4B=? 4C=! 4D=; 4E=) 4F=( 50=‘ # 51 is visually identical to 50, but does not trigger "speech" sound effects or auto-indentation 51=[‘; no sound, no indent] 52=[.’] # 53 is the apostrophe version of 40 53=' 54=[ ’] 5F= F0=[ Point(s)] F1=[(n )enemy name] F3=[experience Point(s)] F4=[enemy name] F5=[number] F6=[spell name] F7=[item name] F8=[HERO] FB=[wait] # FC is the end token used in the main script /FC=[end]\n\n FD=[line]\n # FF is the end token used in the item/monster/spell lists /FF=[/name]\n Title: Re: General NES Hacking Questions Post by: Chicken Knife on December 31, 2018, 01:45:13 am This worked perfectly. I'll have to study the format you used here for when I get through this revision and eventually move on to doing the same for DW3. 
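For reference, the POINTER_RELATIVE arithmetic abw worked through above (pointer value + #BASE POINTER = file offset) can be sanity-checked with a short sketch like this; it assumes a headered DW1 ROM and the $8012-$8037 pointer table range from the block above, and the filename is a placeholder. Code: [Select]
# Minimal sketch of the #BASE POINTER math: each 2-byte little-endian pointer in
# the DW1 table at file offsets 0x8012-0x8037, plus the base value ($10, i.e. the
# iNES header size), gives the file offset of that pointer's first string.
# Filename is a placeholder.

ROM_PATH = "Dragon Warrior I - Edit.nes"   # placeholder name
BASE_POINTER = 0x10

with open(ROM_PATH, "rb") as f:
    rom = f.read()

for i, offset in enumerate(range(0x8012, 0x8038, 2)):
    pointer = rom[offset] | (rom[offset + 1] << 8)          # little-endian
    print(f"pointer #{i:2d} @ 0x{offset:04X}: ${pointer:04X} "
          f"-> strings start at file offset 0x{pointer + BASE_POINTER:05X}")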
I really want to develop the skill level to do this on my own, but the thing I'm most foggy on is what kind of process you use to figure out all these opcodes. Would you do an initial extraction with just letters and the line breaks and then use context clues to flesh out the opcodes? Title: Re: General NES Hacking Questions Post by: abw on December 31, 2018, 01:17:34 pm This worked perfectly. Hurray! I really want to develop the skill level to do this on my own, but the thing I'm most foggy on is what kind of process you use to figure out all these opcodes. Every game is a world unto itself, so the only (as far as I know, anyway) guaranteed way to figure out how and where the game stores its text is by getting that text to display in a debugging emulator and tracing the code from the point where it writes the text to screen all the way backwards until you find the code that read the text from ROM. You'll need to be at least somewhat familiar with the console's hardware, the emulator's debugging features, and whatever variety of ASM the console uses, and it's usually a pretty time-consuming process, but it will give you highly detailed knowledge of how the game handles its text. Frequently, though, there are indeed easier ways. Would you do an initial extraction with just letters and the line breaks and then use context clues to flesh out the opcodes? For games with simple text encodings like Dragon Warriors I and III, yeah, that's probably the fastest way to go about it. Making a table file based on how the individual character tiles are laid out in the PPU and then dumping a block of text (if you already know where the text and/or pointers are) or even loading the ROM and table file in a hex editor and running a quick visual scan through the ROM for blocks that look like text will go a long way towards getting a usable script dump. After that, it's usually not too hard to guess what any other codes appearing in the dump are supposed to do, especially if you're already familiar with what the text is supposed to say (e.g. due to playing the game obsessively as a child) or can find the string in-game. For ones you aren't too sure about, experimenting by removing them from the string or adding them elsewhere and observing how the game's behaviour changes will often shed more light on their purpose, and if all else fails, you can start diving in to the ASM. For games with slightly trickier text encodings like the main scripts of Dragon Warriors II and IV, even if you know how the individual characters are laid out in the PPU, the game doesn't store the text that way in ROM and you're never going to be able to just guess how the text actually is stored within any reasonable time frame (unless somebody else has already done a bunch of work for you, thus allowing you to make better guesses), so the simple methods aren't going to work there and tracing through the game's code is probably the best way of finding out what in the world it's doing under the hood. Title: Re: General NES Hacking Questions Post by: Chicken Knife on January 01, 2019, 07:42:07 pm Every game is a world unto itself, so the only (as far as I know, anyway) guaranteed way to figure out how and where the game stores its text is by getting that text to display in a debugging emulator and tracing the code from the point where it writes the text to screen all the way backwards until you find the code that read the text from ROM. 
You'll need to be at least somewhat familiar with the console's hardware, the emulator's debugging features, and whatever variety of ASM the console uses, and it's usually a pretty time-consuming process, but it will give you highly detailed knowledge of how the game handles its text. Frequently, though, there are indeed easier ways. This brings me to a question. Over the last couple weeks I've been making a concerted effort to learn more about the NES architecture, study tutorials on the basics of ASM, and have spent more time playing with the FCEUX debugger. The 6502 NES ASM tutorials usually focus on teaching the basics of building an NES game from the ground up, which is good (if overwhelming) information but doesn't necessarily hone in on what I really want to be learning, which is what you highlighted--using the debugger to follow the data trails back to the rom code that dictates them. I've spent a lot of time looking for videos or tutorials around debugging and haven't come up with much. The only substantial information I've encountered is what was included in the FCEUX files themselves. Do you happen to recall anything in particular that helped you learn the process of tracing data back to its source? I think this is what would help me most not only with resolving some of these script editing issues but it would also help me track down how tile address instructions are stored in the code for the graphics work I need to do. Title: Re: General NES Hacking Questions Post by: abw on January 02, 2019, 10:39:17 am Do you happen to recall anything in particular that helped you learn the process of tracing data back to its source? Nothing specifically, alas. At a high level, the process is fairly simple: start with something you know (e.g. some text currently being displayed), figure out how that something got to be where it is / where it came from, and then keep repeating until "where it came from" is the ROM. The details can sometimes get messy, but FCEUX provides a pretty good collection of debugging tools for helping you do all that figuring out. The Dragon Warrior code seems to be very fond of copying data back and forth between multiple staging areas in cartridge RAM before writing to the PPU, so even if you've mastered the use of FCEUX's Debugger, using its Trace Logger is probably still going to be a quicker way to find out useful information for these particular games. I wrote up a detailed description of tracking down DW1's item list and the pointer to it using FCUEX's Trace Logger in the Dragon Warrior 1 Spanish Translation (http://www.romhacking.net/forum/index.php?topic=26135.msg358879#msg358879) thread; you might find that helpful if you haven't read it already. Title: Re: General NES Hacking Questions Post by: Chicken Knife on January 06, 2019, 10:45:39 pm @abw, So I started the process of reinserting the script changes as I go. My main motivation for doing so was that I was hoping to be able to track how many characters worth of data remained available, as that indicator was very helpful while working on II. After my first insert of my DW1 script, the command prompt curiously gave me no indication of remaining data afterwards. Other than that, the insertion seems to be working within the game. I believe I have some data available since I eliminated a ton of blank character spaces in the existing script. Am I missing something that turns that on this time around? This is the data I added to the top of my Atlas file. 
// Define, load, and activate a TABLE #VAR(Table, TABLE) #ADDTBL("dw1_script.tbl", Table) #ACTIVETBL(Table) // Jump to start of script #JMP($8038) #HDR($10) I didn't include the auto commands for mid-string swaps/counter variables, etc that were used in my Atlast file for DW2 I suppose I can pull the game up for myself in a hex editor to gauge how I'm doing with space but that command prompt indicator I always got before was very helpful. Title: Re: General NES Hacking Questions Post by: abw on January 07, 2019, 09:34:43 pm Am I missing something that turns that on this time around? Yup - without an upper bound on the insert range, there's no good way to calculate how much available space remains. abcde will just keep on inserting data until either it runs out of data to insert or your computer runs out of disk space (though I haven't tried that last one myself) :P. Try changing the jump command to e.g. "#JMP($8038, $BCBF)" and you should get some better feedback. On a side note, I just submitted an update for abcde. Nothing major, but there are a couple of goodies that will be helpful if you end up doing some menu work on these games. Title: Re: General NES Hacking Questions Post by: Choppasmith on February 06, 2019, 07:47:17 pm So after a lot of editing (and I still wouldn't say it's 100% done) I managed to edit my script dump with the mobile script. It's 80K, about 50% bigger than the original. I had to edit it a few times to fix some errors, thankfully abcde makes it easy to find what the troubled line is. I then get this: (https://i.imgur.com/DbrUmm6.png) So far so good? But then I get some scrambled text: (https://i.imgur.com/DrJGHVe.png) Looks like I need to double check my dictionary values. I hope that's my only problem. EDIT: Yep dictionary and table needed a tiny tweak (https://i.imgur.com/GxaFTL2.png) Though, I think there's something funny with the special table provided in this thread. The combined .' for dialog works fine but the standalone closing quote/apostrophe just brings up the openign single quote instead (https://i.imgur.com/9vw9uVS.png) Also, getting some scrambled text. I wonder if this is from being over the limit. There's no way to immediately tell as said on the last page. (https://i.imgur.com/8yK2JXO.png) EDIT 2: Yeah the Midenhall castle NPC dialog is near the end of the first bank. If I'm reading it right, would using the script on the previous page be a way of transferring dialog/pointers to the second bank? Title: Re: General NES Hacking Questions Post by: abw on February 06, 2019, 10:28:36 pm So after a lot of editing (and I still wouldn't say it's 100% done) I managed to edit my script dump with the mobile script. It's 80K, about 50% bigger than the original. I had to edit it a few times to fix some errors, thankfully abcde makes it easy to find what the troubled line is. You might want to grab the latest version (0.0.3) of abcde - I made some improvements like letting you know exactly how far over your limit the script goes. For your massive script, DW2 comes with 3 whole empty banks, so at least you shouldn't have to expand the ROM this time. If you haven't come across it yet, the code for switching between script banks starts at 0x3FE15 /$0F:$FE05, so that seems like a good place to start rewriting ASM. Though, I think there's something funny with the special table provided in this thread. The combined .' 
for dialog works fine but the standalone closing quote/apostrophe just brings up the openign single quote instead Yeah, like I said, I hadn't put much effort into the table file at that point, and I've noticed a couple of mistakes in it. I've uploaded my current table file here (https://drive.google.com/open?id=1MnB--MNEd4BG_IOxt4qZcwq3y6kigmMr) if you're interested, but if you've been modifying the dictionary, you'll also want to ensure that your dictionary does in fact contain the correct value. Title: Re: General NES Hacking Questions Post by: Choppasmith on February 13, 2019, 02:42:50 pm You might want to grab the latest version (0.0.3) of abcde - I made some improvements like letting you know exactly how far over your limit the script goes. For your massive script, DW2 comes with 3 whole empty banks, so at least you shouldn't have to expand the ROM this time. If you haven't come across it yet, the code for switching between script banks starts at 0x3FE15 /$0F:$FE05, so that seems like a good place to start rewriting ASM. Yeah, like I said, I hadn't put much effort into the table file at that point, and I've noticed a couple of mistakes in it. I've uploaded my current table file here (https://drive.google.com/open?id=1MnB--MNEd4BG_IOxt4qZcwq3y6kigmMr) if you're interested, but if you've been modifying the dictionary, you'll also want to ensure that your dictionary does in fact contain the correct value. Thanks for that, I've updated my table. I tried using the newest version of your abcde, I now get: "#JMP bounded by at$BE0F has -15738 ($-3D7A) space left" So I take it this means my script overwrites 15,738 bytes over the limit of the bank? Would cutting down my script temporarily be a good way to indicate where would be a good stopping point for the first bank? (I know I can't just go into the ROM and see where it overwrites data) BTW, maybe I'm missing something (and I'll be honest, I'm earnest in learning but a lot of this stuff still makes me go @_@) Why is it in the ROM Map, The Main Script Part 1 starts at B7C2 and Part 2 starts at 14010 but in the atlas script they're the other way around? Title: Re: General NES Hacking Questions Post by: abw on February 13, 2019, 06:38:46 pm Thanks for that, I've updated my table. I tried using the newest version of your abcde, I now get: "#JMP bounded by at$BE0F has -15738 ($-3D7A) space left" So I take it this means my script overwrites 15,738 bytes over the limit of the bank? Would cutting down my script temporarily be a good way to indicate where would be a good stopping point for the first bank? (I know I can't just go into the ROM and see where it overwrites data) If you want to get technical about it, the script space is already split between 2 banks, so that means you're 15,738 bytes over the total script space limit rather than the bank limit, but basically, yes. For finding out how much of your script fit into the first bank, you can check 0x3FA90 to see which pointer starts the next bank; if your new script is twice as big as the original, you'll probably have something like$17 there instead of $2E. 
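(A minimal sketch of that check, for anyone who would rather not open a hex editor — the filename is a placeholder and the ROM is assumed to be headered: Code: [Select]
# Peek at the byte at file offset 0x3FA90 described above: the first pointer
# index whose strings are read from ROM bank 2 instead of bank 5 (0x2E in the
# unmodified game). Filename is a placeholder.

with open("Dragon Warrior II - Edit.nes", "rb") as f:   # placeholder name
    f.seek(0x3FA90)
    split_index = f.read(1)[0]
print(f"strings switch to bank 2 starting at pointer index ${split_index:02X}")
)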
Temporarily cutting down your script is probably the fastest way to find out how much fits into the original script space, though since it looks like you're going to need 3 banks anyway, you may wish to consider moving the script space from 0xB7C2-0xBE0F to one of the unused banks; some of DW2's bankswap routines are limited to swapping in one of the first 8 banks, so having lots of free space in bank 3 might come in handy if you plan on making other changes too. BTW, maybe I'm missing something (and I'll be honest, I'm earnest in learning but a lot of this stuff still makes me go @_@) Why is it in the ROM Map, The Main Script Part 1 starts at B7C2 and Part 2 starts at 14010 but in the atlas script they're the other way around? Oh, sorry, that was my fault - I was in "sort things by ROM address" mode :P. I've corrected the wiki now. Title: Re: General NES Hacking Questions Post by: Choppasmith on March 01, 2019, 02:50:31 pm Sorry for the late reply, just been finding no time to spare lately, but I'm still determined to learn and get this figured out. So I found that I can fit up to "Pointer 34" in the original space (with 818 bytes left over which is plenty for possible edits and additions) leaving the text from Pointer 35-45 and the last little bit of "Script Part 2" left. I also inserted just the script AFTER Pointer 34 into the original space and got 1401 bytes left. So by doing a little bit of math subtracting those numbers from 17,955 (the combined amount of bytes for script space according to the ROM Map) means the first part of my script that fits is 17,137 bytes and the remaining is 16,554 meaning my whole script is 33,691 bytes. So yeah despite different text file sizes, the actual script data is about double. So I'm already a little confused here... Firstly For finding out how much of your script fit into the first bank, you can check 0x3FA90 to see which pointer starts the next bank; if your new script is twice as big as the original, you'll probably have something like$17 there instead of $2E. So what am I looking at here (at 3FA90) exactly? It certainly says 2E. What does that signify? And secondly, more a noobish question, but am I understanding correctly that the Wiki stating the game has 16 PRG-ROM Pages that are 16KB each referring to the banks? If I do switch banks for dialog, and yes I can see there are plenty of blank patches of data, is there an easy way to determine what bank is where aside from trying to divide the bytes of the rom by 16? I feel like I'm missing something glaringly obvious. As far as doing anything else. The only thing I might do is what you and Chicken Knife talked about in moving the Monster Names. I had shortened the names just enough to fit, but having that extra little bit of space for having full names would be nice. Script is the priority for me though! Anyway, once again, I appreciate the help and feel free to tell me just enough so I can figure it out while learning. Certainly don't want to come off as the type who just wants stuff done for them! Title: Re: General NES Hacking Questions Post by: abw on March 01, 2019, 06:52:28 pm So what am I looking at here (at 3FA90) exactly? It certainly says 2E. What does that signify? 
What you've got there is the key byte inside the routine for deciding which ROM bank needs to be swapped in to get the string the game wants to display: Code: [Select] ; determine which ROM bank to load based on string index ; IN: A = low byte of string index, X = high byte of string index << 5 (don't ask me why), Y and C irrelevant ; OUT: no change to A or X, Y = index of ROM bank to swap in to read the string from, C not used, but does indicate whether we ended up choosing bank 5 (clear) or 2 (set) ; control flow target (from$FA68) 0x03FA7E|$0F:$FA6E:A0 05    LDY #$05 ; default to bank 5 0x03FA80|$0F:$FA70:48 PHA ; save the original string index in A 0x03FA81|$0F:$FA71:29 F0 AND #$F0 ; useless op 0x03FA83|$0F:$FA73:4A      LSR 0x03FA84|$0F:$FA74:4A      LSR 0x03FA85|$0F:$FA75:4A      LSR 0x03FA86|$0F:$FA76:4A      LSR 0x03FA87|$0F:$FA77:85 10    STA $10 ; 16 strings per pointer, so low byte of string index >> 4 = low nybble of pointer index 0x03FA89|$0F:$FA79:8A TXA 0x03FA8A|$0F:$FA7A:29 E0 AND #$E0 0x03FA8C|$0F:$FA7C:4A      LSR ; A = high nybble of pointer index 0x03FA8D|$0F:$FA7D:05 10    ORA $10 ; glue the high and low nybbles of the pointer index together into a single byte 0x03FA8F|$0F:$FA7F:C9 2E CMP #$2E 0x03FA91|$0F:$FA81:90 02    BCC $FA85 ; if the pointer index < #$2E, keep the bank 5 default 0x03FA93|$0F:$FA83:A0 02    LDY #$02 ; if the pointer index >= #$2E, load bank 2 instead ; control flow target (from $FA81) 0x03FA95|$0F:$FA85:8C C6 60 STY$60C6 0x03FA98|$0F:$FA88:68      PLA ; restore the original string index in A ; control flow target (from $D12B) ; external bank control flow target (from$06:$9542,$06:$9550) 0x03FA99|$0F:$FA89:60 RTS If you added the #INC commands I mentioned in an earlier post (http://www.romhacking.net/forum/index.php?topic=27053.msg367369#msg367369), that #$2E should have been updated to reflect the new dividing line between bank 5 and 2. The good news is that it looks like you are pretty close to being able to fit your script into just 2 banks, and if you can manage that, then all you need to do is insert the second half of your script into one of the unused banks and update 0x3FA94 to the new bank number. Do you think you can update the dictionary to get an extra 3% compression? And secondly, more a noobish question, but am I understanding correctly that the Wiki stating the game has 16 PRG-ROM Pages that are 16KB each referring to the banks? Yup! is there an easy way to determine what bank is where aside from trying to divide the bytes of the rom by 16? I feel like I'm missing something glaringly obvious. Math is the way (especially when you're flipping between games with different bank sizes), but this shouldn't be something you need a calculator for. 16 KB = $4000, so (ignoring the$10 byte iNES header), $0000 -$3FFF is bank 0, $4000 -$7FFF is bank 1, $8000 -$BFFF is bank 2, $C000 -$FFFF is bank 3, and so on. So if you have an address like 0x3FA90, you can look at the 3F part and say something to yourself like "3 x 4 = 12, F is in bank 3, and 12 + 3 = 15" to know that 0x3FA90 is in ROM bank 15. Anyway, once again, I appreciate the help and feel free to tell me just enough so I can figure it out while learning. Certainly don't want to come off as the type who just wants stuff done for them! Heh, the first game was free, but you're going to have to work for the second one :D. 
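The same bank arithmetic in sketch form, in case it helps; the only constants are the 16 KB ($4000) PRG page size and the $10-byte iNES header mentioned above. Code: [Select]
# Minimal sketch of the bank math abw walks through above: strip the 0x10-byte
# iNES header, then divide by the 16 KB ($4000) PRG-ROM page size.

BANK_SIZE = 0x4000   # 16 KB PRG-ROM pages
HEADER = 0x10        # iNES header

def file_offset_to_bank(offset):
    return (offset - HEADER) // BANK_SIZE

def bank_to_file_offset(bank):
    return bank * BANK_SIZE + HEADER

print(file_offset_to_bank(0x3FA90))   # 15, matching abw's "3 x 4 = 12 ... + 3"
print(hex(bank_to_file_offset(12)))   # 0x30010, the first byte of PRG bank 12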
Title: Re: General NES Hacking Questions Post by: Choppasmith on March 02, 2019, 12:23:48 pm Heh, the first game was free, but you're going to have to work for the second one :D. Sounds fair :) I'll get this slowly but surely. Okay, so, in updating my dictionary, I had to update the table because I was still unable to use apostrophes and single closing quotes. Part of my new dictionary table looks like this (taken from your latest table) Code: [Select] %1110110101=' %1110110110=- %1110110111=’ I do the insert.bat and it stops at a line at Pointer 26 Code: [Select] unable to tokenize; best attempt failed at input position 160 at ^ indicator in ]'s coffin.[end-FC]ΓÇÿ┬ì'Tis more than a ma ]'s coffin.[end-FC]ΓÇÿ^ (does your table file contain a "┬ì"?) in text string starting at Atlas.txt line 1127! And here's the line as it appears in my atlas.txt Code: [Select] puts it back in [name]'s coffin.[end-FC] ‘'Tis more than a man could ask to know such elation at so advanced an age!’[wait][line] Really baffling what happened here. It's not like this is the first apostrophe in the text. Also, while I updated the dictionary to save a few bytes (adding a space for "gold", "Goddess", and "key") I found that I missed a bunch of post-final boss NPC dialog that needed adding, so droping it 3% might be out unfortunately. Title: Re: General NES Hacking Questions Post by: abw on March 02, 2019, 06:13:33 pm I do the insert.bat and it stops at a line at Pointer 26 Code: [Select] unable to tokenize; best attempt failed at input position 160 at ^ indicator in ]'s coffin.[end-FC]ΓÇÿ┬ì'Tis more than a ma ]'s coffin.[end-FC]ΓÇÿ^ (does your table file contain a "┬ì"?) in text string starting at Atlas.txt line 1127! And here's the line as it appears in my atlas.txt Code: [Select] puts it back in [name]'s coffin.[end-FC] ‘'Tis more than a man could ask to know such elation at so advanced an age!’[wait][line] Really baffling what happened here. It's not like this is the first apostrophe in the text. Hmm, that does look odd. Based on that output, it looks like your system is defaulting to code page 437 (where "ΓÇÿ" = E2 80 98 = the UTF-8 encoding of U+2018 = "‘"), which means "┬ì" = C2 8D = the UTF-8 encoding of U+008D = Reverse Line Feed, but I have no idea how that managed to get in there. So, it looks like you've got an invisible character sitting between the "‘" and the "'" for some reason (you should be able to verify that hypothesis by viewing your insert script with a hex editor); try deleting that section and re-type the "‘'". Also, here's a tip from abcde's readme: if you do "chcp 65001" in cmd before running abcde (e.g. at the start of a .bat file), you should get abcde's UTF-8 output showing up as UTF-8 characters instead of whatever code page your system defaults to... though in a case like this involving a non-printable control code, having gobbledygook is probably more helpful in tracking down the cause of the error! Also, while I updated the dictionary to save a few bytes (adding a space for "gold", "Goddess", and "key") I found that I missed a bunch of post-final boss NPC dialog that needed adding, so droping it 3% might be out unfortunately. Ah, if you have more text to add anyway, then yeah, trying to fit everything into 2 banks might be more work than just using 3 banks. So much for only needing a 1-byte hack :P. 
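A quick way to flush out invisible characters like that stray U+008D without reaching for a hex editor is a sketch along these lines — it assumes the Atlas script is valid UTF-8, and the filename is a placeholder. Code: [Select]
# Minimal sketch: report any control/format codepoints (Unicode category "C*")
# hiding in a UTF-8 Atlas script, such as the U+008D discussed above.
# Note that tabs will also be flagged. Filename is a placeholder.
import unicodedata

with open("Atlas.txt", encoding="utf-8") as f:   # placeholder name
    for line_no, line in enumerate(f, start=1):
        for col, ch in enumerate(line.rstrip("\n"), start=1):
            if unicodedata.category(ch).startswith("C"):
                print(f"line {line_no}, column {col}: U+{ord(ch):04X}")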
Title: Re: General NES Hacking Questions Post by: Choppasmith on March 05, 2019, 11:14:58 am Okay figured out the weird little script bug (you were right, there was some weird hex between the opening quote and the apostrophe). And with the recent dialog added 34,221 bytes. Just to be sure. Your abcde script enters the script in both areas, right? I just want to make sure my math is right. The space remaining that I get when inserting part of it is the total amount between the two sections? Not just one? Because it seems like the best free space to use as a third bank is 1CC30 which is over 13K. Old Script Part 1: 3FD7 Length (16, 343) Old Script Part 2: 64E Length (1,614) Total: 17,957 And to use a third bank I'm basically going to have to duplicate this part of the code: Code: [Select] 0x03FA91|$0F:$FA81:90 02    BCC $FA85 ; if the pointer index < #$2E, keep the bank 5 default 0x03FA93|$0F:$FA83:A0 02    LDY #$02 ; if the pointer index >= #$2E, load bank 2 instead And change the pointer index and bank so that it's something like pointer index < #$22 (Pointer 34) then have something for Pointer 35-45 and then have a line for code 2E (46) equal and greater in value. Is that right? Title: Re: General NES Hacking Questions Post by: abw on March 05, 2019, 05:18:32 pm Okay figured out the weird little script bug (you were right, there was some weird hex between the opening quote and the apostrophe). Hurray! Just to be sure. Your abcde script enters the script in both areas, right? I just want to make sure my math is right. The space remaining that I get when inserting part of it is the total amount between the two sections? Not just one? Yup, yup, and yup. Because it seems like the best free space to use as a third bank is 1CC30 which is over 13K. I'd probably go with one (or two if you need another one) of the completely unused banks starting at 0x30010, 0x34010, or 0x38010 for a full 16k each, but if you can fit all your script into the free space starting around 0x1CC30, then great. And to use a third bank I'm basically going to have to duplicate this part of the code: Code: [Select] 0x03FA91|$0F:$FA81:90 02 BCC$FA85 ; if the pointer index < #$2E, keep the bank 5 default 0x03FA93|$0F:$FA83:A0 02 LDY #$02 ; if the pointer index >= #$2E, load bank 2 instead And change the pointer index and bank so that it's something like pointer index < #$22 (Pointer 34) then have something for Pointer 35-45 and then have a line for code 2E (46) equal and greater in value. Is that right? Basically, yeah. You'll just need to find a home for that code, but bank F is already pretty full. There should be enough unused space for your needs at 0x3FFA7 - 0x3FFC9 (0x3FFCA is used!), though. It also possible there's other code elsewhere that might need to be updated too; I haven't done that analysis for you this time either ;). Title: Re: General NES Hacking Questions Post by: Choppasmith on March 10, 2019, 02:36:55 pm Oh Happy Day! Feeling overwhelmed by the idea of having to move important code around, I took another crack at editing the dictionary, swapping out entries with about 15 or so uses with ones much higher and I got my script down to 32,378 bytes! And I can totally trim the fat in my script if need be. That should be much easier! What a relief! So, if I keep bank 5 as default and go with your suggestion of using the space at 30010 (thanks btw, I didn't really think of looking that far, I didn't think there'd be full unused banks) which would be bank... 10, right? That would change the value of 3FA94 to 0A. 
Now I just need to add the bit from your earlier post so it can split the script properly. Edit: yes the JMP is updated to reflect the dumping to the new bank. Code: [Select] // add this near the top of the insert script: #VAR(pointerNum, COUNTER) // create a COUNTER variable named pointerNum #CREATECTR(pointerNum, 8, 0) // pointerNum is an 8-bit value initialized to 0 #AUTOCMD($17FE7, #WLB(pointerNum,$3FA90)) // update the code that controls which pointer starts the next bank // and then after every #W16 line in the insert script, add: #INC(pointerNum, 1) #JMP($14010) #HDR($C010) // auto-commands for when DW2 does a mid-string bankswap and resets its read address: #AUTOCMD($17FE7, #HDR($10)) #AUTOCMD($17FE7, #JMP($30010, $3400F)) Edit: yes the JMP is updated to reflect the dumping to the new bank. I seem to be close. Once trying with my full script (as opposed to pieces to determine length and where the best split would be) I get "#JMP bounded by at$3400F has -1223 ($-4C7) space left" so it seems maybe it's still a bit too big? Did I make a miscalculation somewhere? Either way, 1K to trim down should be easy and inconsequential. Title: Re: General NES Hacking Questions Post by: abw on March 10, 2019, 05:05:58 pm Oh Happy Day! Feeling overwhelmed by the idea of having to move important code around, I took another crack at editing the dictionary, swapping out entries with about 15 or so uses with ones much higher and I got my script down to 32,378 bytes! And I can totally trim the fat in my script if need be. That should be much easier! What a relief! Nice! So, if I keep bank 5 as default and go with your suggestion of using the space at 30010 (thanks btw, I didn't really think of looking that far, I didn't think there'd be full unused banks) which would be bank... 10, right? That would change the value of 3FA94 to 0A. Remember how we expanded the DW1 ROM by adding 4 new banks just before the final fixed bank? Enix did basically the same thing (except in hardware) when porting the game to English; they had to add 8 banks, but didn't end up using 3 of those. In bank 5, you've got$17FE7 - $14010 =$3FD7 bytes, and bank 12 ($0C) gives you another$4000 bytes for a total of $7FD7 = 32,727 bytes, which should leave you with about 350 bytes to spare if you managed to get your script down to 32,378 bytes. Is that 32,378 bytes the script size after being converted to DW2's 5/10-bit encoding, or is that the file size when encoded as UTF-8? If you're still having space issues, don't forget about the single-character dictionary entries - if you're e.g. using "p" more than "w", you could kick "w" into the 10-bit range and move "p" into the 5-bit range to get better compression. For English text, you can probably also replace the "q" entry with "qu". Title: Re: General NES Hacking Questions Post by: Choppasmith on March 10, 2019, 05:24:29 pm Nice! Remember how we expanded the DW1 ROM by adding 4 new banks just before the final fixed bank? Enix did basically the same thing (except in hardware) when porting the game to English; they had to add 8 banks, but didn't end up using 3 of those. In bank 5, you've got$17FE7 - $14010 =$3FD7 bytes, and bank 12 ($0C) gives you another$4000 bytes for a total of $7FD7 = 32,727 bytes, which should leave you with about 350 bytes to spare if you managed to get your script down to 32,378 bytes. Is that 32,378 bytes the script size after being converted to DW2's 5/10-bit encoding, or is that the file size when encoded as UTF-8? 
Well, as said a few posts ago, I split my script into two parts to determine how much of it would fit. I took the bytes remaining as reported by abcde and subtracted from the total space available and did it again for the second part then combined the values which is how I got the 32,378 bytes. I had done the same thing before and I was getting over 34,000 bytes, so even if my math is somehow off it's still a significant decrease. Nice! Remember how we expanded the DW1 ROM by adding 4 new banks just before the final fixed bank? Enix did basically the same thing (except in hardware) when porting the game to English; they had to add 8 banks, but didn't end up using 3 of those. In bank 5, you've got$17FE7 - $14010 =$3FD7 bytes, and bank 12 ($0C) gives you another$4000 bytes for a total of $7FD7 = 32,727 bytes, which should leave you with about 350 bytes to spare if you managed to get your script down to 32,378 bytes. Is that 32,378 bytes the script size after being converted to DW2's 5/10-bit encoding, or is that the file size when encoded as UTF-8? If you're still having space issues, don't forget about the single-character dictionary entries - if you're e.g. using "p" more than "w", you could kick "w" into the 10-bit range and move "p" into the 5-bit range to get better compression. For English text, you can probably also replace the "q" entry with "qu". Yeah that could work. I would assume, in layman's terms, that the first set of entries use less data than the 4 "C" tables? Or does each table use extra bits of data the further you go down? (Edit : forget that I'm dumb)I honestly still don't quite understand how those work, I initially thought it was like a special table where'd you use two bytes to load a dictionary value, and pretty much every word/letter from the dictionary wound be preceded by the corresponding value and then stuff from the first table fills in the rest with no dictionary value needed. Edit : Just read the Wiki, I guess it works like I said above but in bits, not bytes? Title: Re: General NES Hacking Questions Post by: abw on March 10, 2019, 06:12:36 pm Yeah that could work. I would assume, in layman's terms, that the first set of entries use less data than the 4 "C" tables? Or does each table use extra bits of data the further you go down? (Edit : forget that I'm dumb)I honestly still don't quite understand how those work, I initially thought it was like a special table where'd you use two bytes to load a dictionary value, and pretty much every word/letter from the dictionary wound be preceded by the corresponding value and then stuff from the first table fills in the rest with no dictionary value needed. Edit : Just read the Wiki, I guess it works like I said above but in bits, not bytes? Basically, yeah. I had this typed up before I saw your edit, so I'll post it anyway just in case it's still helpful: -- Sort of, except as far as the game's concerned, the individual tokens are 5 bits each, not 8, and 4 of the entries from the first 5-bit table are used to switch to the corresponding "C" table. You could split that single table file into multiple table files if you wanted to; I just prefer having all the entries in a single table. You can tell how much space each entry takes up by looking at its left-hand side. In a hexadecimal table, "80" takes up one byte (or 2 nybbles or 8 bits, depending on how you want to think of it), "80FF" takes up two bytes (/4 nybbles/16 bits), and so on; in a binary table, "11011" takes up 5 bits, "1110101111" takes up 10 bits, and so on. 
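To get an intuition for how those bit costs play out across a whole string, here is a rough estimator — emphatically not abcde's real encoder — that charges 5 bits for a main-table hit and 10 bits (a 5-bit "C" switch plus a 5-bit index) for everything else; the entry lists below are made-up placeholders, not the game's actual dictionary. Code: [Select]
# Rough estimator (not abcde's real encoder): greedily match a string against a
# toy dictionary and total 5 bits per main-table hit and 10 bits per "C"-table
# hit, to show why promoting frequent entries to the 5-bit table shrinks the
# script. The entries below are illustrative placeholders only.

FIVE_BIT = {" ", "e", "t", "a", "o", "the", "thou "}   # hypothetical 5-bit entries
TEN_BIT  = {"King", "q", "w", "z"}                     # hypothetical 10-bit entries

def estimate_bits(text):
    entries = sorted(FIVE_BIT | TEN_BIT, key=len, reverse=True)  # longest match first
    total, i = 0, 0
    while i < len(text):
        for entry in entries:
            if text.startswith(entry, i):
                total += 5 if entry in FIVE_BIT else 10
                i += len(entry)
                break
        else:
            total += 10   # assume any character outside the toy dictionary costs 10 bits
            i += 1
    return total

print(estimate_bits("thou the King"), "bits")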
This will be an even bigger issue if you decide to tackle DW4, since it uses a Huffman encoding where the binary representation of individual characters ranges between 3 bits (e.g. for "e") and 18 bits (e.g. for "8"). Title: Re: General NES Hacking Questions Post by: Choppasmith on March 11, 2019, 03:18:44 pm Basically, yeah. I had this typed up before I saw your edit, so I'll post it anyway just in case it's still helpful: -- Sort of, except as far as the game's concerned, the individual tokens are 5 bits each, not 8, and 4 of the entries from the first 5-bit table are used to switch to the corresponding "C" table. You could split that single table file into multiple table files if you wanted to; I just prefer having all the entries in a single table. You can tell how much space each entry takes up by looking at its left-hand side. In a hexadecimal table, "80" takes up one byte (or 2 nybbles or 8 bits, depending on how you want to think of it), "80FF" takes up two bytes (/4 nybbles/16 bits), and so on; in a binary table, "11011" takes up 5 bits, "1110101111" takes up 10 bits, and so on. This will be an even bigger issue if you decide to tackle DW4, since it uses a Huffman encoding where the binary representation of individual characters ranges between 3 bits (e.g. for "e") and 18 bits (e.g. for "8"). Thanks, once I saw that the programmer's calculator had a binary converter, it just clicked. Anyway, just from doing the above suggestion I'm down another 400 bytes! And just after some needed editing of the intro I'm now down to 760 bytes over the limit. I was really loose and liberal with my initial script (like gratuitous line breaks, pauses, and spaces), so it should be easy to trim the fat without sacrificing much. Although when I tried dumping with the new second address, I'd get horrible garbage dialog. The intro is only the 8th pointer in bank 5, so I'm not too sure what happened there. I know I have to update the ROM with the new second bank, but I'm not sure why dialog in the beginning wouldn't work properly. Did I miss something (other than updating the second bank?) A couple other weird issues Still having trouble with closing single quotes for some reason. (https://i.imgur.com/w2bUVKS.png) I double checked the ROM and I do indeed have$66 in the dictionary. I also have an entry with "!’" but that has Apostrophes ($67) work fine though Also (https://i.imgur.com/VrR1E2T.png) Not sure what's going on here. There's a couple of windows like this with the oddly premature uncalled for line break. I'm assuming the window counts how many bits are used per line as opposed to individual letters/spaces/characters? Any suggestions? Or is it just a weird game programming/abcde limitation? Title: Re: General NES Hacking Questions Post by: abw on March 11, 2019, 08:03:16 pm Hurray! The proper choice of dictionary entries makes a huge difference to the compression ratio. Just make sure you leave the first 5 end tokens and the 4 "C" table switches alone (since the game cares about those) and fill out the rest of the main 5-bit table with your 23 best tokens. As for your issues, it's kind of hard to say for sure without seeing the files. If dumping the inserted data didn't work but the game still displays it correctly, then something is wrong with your dump script. For the closing single quotes, my next guess would be an error in the dictionary lengths, though a length error would also mess up all the dictionary entries following the one with the wrong length. 
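For picking those 23 best tokens, a crude frequency count along the lines of the "full frequency analysis" mentioned earlier in the thread might look like this sketch — the candidate list and filename are placeholders, and abcde directives/comments are skipped so only dialogue text is counted. Code: [Select]
# Minimal sketch of a dictionary frequency analysis: count how often each
# candidate entry appears in the Atlas script so the most-used ones can be
# promoted to the 5-bit table. Candidate list and filename are placeholders.

CANDIDATES = ["the", "thou", "King", " of ", "qu", "piece(s) of gold"]  # placeholders

with open("Atlas.txt", encoding="utf-8") as f:   # placeholder name
    script = f.read()

# strip abcde directives and comments so only dialogue text is counted
script = "\n".join(line for line in script.splitlines()
                   if not line.lstrip().startswith(("#", "//")))

for entry in sorted(CANDIDATES, key=lambda e: -script.count(e)):
    print(f"{script.count(entry):5d}  {entry!r}")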
The random line breaks is definitely strange too, and not an issue I've come across myself; I've definitely inserted longer strings than that without getting inappropriate line breaks. I haven't looked at the code for it, but just based on observation, DW2's line wrapping algorithm seems pretty solid. Any chance it always happens around the words "What" or "is"? If so, that could indicate that you've got a line break instead of a space in one of the dictionary entries - just a guess! Title: Re: General NES Hacking Questions Post by: Choppasmith on March 16, 2019, 06:27:57 pm I got it! I had used Pointer Tables to extract, edit, and insert the dictionary entries. Yeah, I know it's clunky, but easy for me. It was changing blank spaces (5F) to [no voice] (59). Once I changed it in the ROM it was fine. Similar problem with the closing single quotes and I fixed that too. I was hoping that this could've shrunk the script size, but, alas, it didn't. Oh well, more editing work for me! Congrats on your Latin translation release, abw! I had looked at the Read Me and you said one of your improvements was editing the main party status windows to show (was this battle or field or both?) full character names instead of just showing the first four letters. When working on DW1 menus, the thought of doing that for later entries struck me as a "would be nice if possible". You mentioned it was more than a one byte hack. Were there other entries for the window you had to change besides length? Or is it like monster/item/spell/etc names where you had to find that value that affected display length? Btw, I know it'd be easy to tell the magic byte that affects monster name length like DW1, but you did explain how you did it and I have every intention to try myself as part of my learning. Title: Re: General NES Hacking Questions Post by: abw on March 17, 2019, 05:53:07 pm I got it! Congrats! Congrats on your Latin translation release, abw! I had looked at the Read Me and you said one of your improvements was editing the main party status windows to show (was this battle or field or both?) full character names instead of just showing the first four letters. When working on DW1 menus, the thought of doing that for later entries struck me as a "would be nice if possible". You mentioned it was more than a one byte hack. Were there other entries for the window you had to change besides length? Or is it like monster/item/spell/etc names where you had to find that value that affected display length? Thanks! If you check the screenshots, you'll see that full character names are displayed both in and out of battle; I think I made changes for displaying full names in a total of 37 different menus, which typically also involved widening and repositioning them and sometimes also involved making updates to the menu wiping process; the WEAPON/ARMOR/SHILD/HELMET menus were particularly irritating to deal with in that respect. For the code changes, the basic problem is that DW2 stores the first 4 characters of each hero's name in one spot and the last 4 characters in a completely different spot (probably due to the English version's extra 4 characters being bolted on to the original 4 characters present in the Japanese version), so you can't just find the code that says "read 4 bytes from X" and update one byte to say "read 8 bytes from X", you need to add more code (which needs more space) to read 4 bytes from X and glue them together with 4 bytes from Y. 
For the later games in the series, the mini status menu was rearranged into 1 column per hero instead of 1 row per hero, so widening the columns probably wouldn't work out very well due to lack of screen space if everybody has a long name, but you could switch the layout back to rows if you wanted. It's just a question of how much work would be involved. Btw, I know it'd be easy to tell the magic byte that affects monster name length like DW1, but you did explain how you did it and I have every intention to try myself as part of my learning. Trace logger is your friend there too! Title: Re: General NES Hacking Questions Post by: Choppasmith on April 16, 2019, 03:08:52 pm Back from vacation and ready to tackle this! Okay, I'm having a strange pointer issue (at least I think it's pointer related). I got my script in (with around 19 bytes to spare!). When the game loads the intro, instead of loading the line "This is the royal castle of the kingdom of Moonbrooke." which is Pointer 7, String 14, it loads instead, "‘We shall await thy return most eagerly!’", which is a couple lines up at String 12. Granted I removed the "Hold Reset" dialog to save space but I double checked and I don't see any missing end-FC marks I might've missed that would cause the issues. What's weird is, in the next pointer, the first two lines load just fine but everything after is mixed up. Did I mess up the formatting of my atlas.txt somehow? It's right here http://www.mediafire.com/file/e7kqqhf720plzpf/atlas.txt/file (note because of the trouble Chicken Knife had with his script mentioned earlier in this thread, I opted to extract the script with comments:off so it's just the new dialog) I know I read your post, abw, a couple pages back about the ROM flaw that pointers could get mixed up, but I'm not sure I understand what you mean by checking them. In DW1, I could load up the ROM in Windhex, look at the start of the string, the address, then compare it with the pointer, but I'm not sure how I can do that with DW2's compression. Title: Re: General NES Hacking Questions Post by: darthvaderx on April 16, 2019, 05:26:13 pm Is it possible to run Lua Script in Mesen in the same way as in FCEUX? I've tried to run Metroid in every way but none worked. Title: Re: General NES Hacking Questions Post by: abw on April 16, 2019, 09:56:29 pm When the game loads the intro, instead of loading the line "This is the royal castle of the kingdom of Moonbrooke." which is Pointer 7, String 14, it loads instead, "‘We shall await thy return most eagerly!’", which is a couple lines up at String 12. I'm not seeing anything immediately obvious that would cause that kind of error, and a quick test with a version of the original table file tweaked to include your new characters gets me the expected text displayed during the entire intro sequence, so I suspect your problem lies elsewhere. Getting an earlier string from the same pointer makes me think the game might be counting more end tokens than you want it to... what does your table file look like? If you altered any of the first 5 entries, that might explain this behaviour. I know I read your post, abw, a couple pages back about the ROM flaw that pointers could get mixed up, but I'm not sure I understand what you mean by checking them. Basically you just pop open the ROM in a hex editor, look at 0xB772-0xB7D0, and check whether any of those pointers point to $BFD6 or $BFD7.
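(If you'd rather not eyeball that in the hex editor, a few lines of Python will do the same check - "dw2.nes" is just a placeholder for whatever your ROM file is called:)
Code: [Select]
# Quick sanity check of the main script pointer table.
# Reads each word in the 0xB772-0xB7D0 region of the file as a little-endian
# pointer and flags any that land on $BFD6/$BFD7.
with open("dw2.nes", "rb") as f:
    rom = f.read()

for off in range(0xB772, 0xB7D0, 2):
    ptr = rom[off] | (rom[off + 1] << 8)   # little-endian 16-bit value
    flag = "  <-- problem" if ptr in (0xBFD6, 0xBFD7) else ""
    print(f"0x{off:04X}: ${ptr:04X}{flag}")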
If they do, then you'll need to move the auto-jump point (0x17FE7) back a byte or two in order to force the affected pointer to point into the second script bank instead of letting it read non-script data from the first script bank and make a mess of your text. Title: Re: General NES Hacking Questions Post by: Choppasmith on April 18, 2019, 04:42:11 pm I'm not seeing anything immediately obvious that would cause that kind of error, and a quick test with a version of the original table file tweaked to include your new characters gets me the expected text displayed during the entire intro sequence, so I suspect your problem lies elsewhere. Getting an earlier string from the same pointer makes me think the game might be counting more end tokens than you want it to... what does your table file look like? If you altered any of the first 5 entries, that might explain this behaviour. Basically you just pop open the ROM in a hex editor, look at 0xB772-0xB7D0, and check whether any of those pointers point to $BFD6 or $BFD7. If they do, then you'll need to move the auto-jump point (0x17FE7) back a byte or two in order to force the affected pointer to point into the second script bank instead of letting it read non-script data from the first script bank and make a mess of your text. Okay that went better than I hoped! :laugh: So, when you suggested the problem (thanks for that, it's easy for me to overlook little things like that), it was indeed something I did with my table/dictionary. I thought I'd swap the standalone [FF] with the standalone Period that was further down. Once I swapped it back, the intro text displayed fine again. Lesson learned. Adding a note in my copy of the table: DO NOT MESS WITH THE FIRST 5 VALUES! Only problem was I went from just being under the limit by about 20 bytes, to being OVER the limit by about 60. So I had to scour the dictionary again to see if there was some way to get the size back down. After adding a space to the comma entry and replacing the unused ";" with "eth" (after realizing I had a lot of words and phrases that ended in "eth"), boom, that was enough to put me WELL under with about 140 bytes to spare. WOW! I'm ecstatic! I should be able to restore the little bit I had to cut (mostly consists of long gratuitous laughs, pauses, and a post-final boss NPC line). Anyway when I asked about changing the second text bank at 3FA94, I figured having the second block at $3400F would make it Bank 10 (0A) but when you replied with an explanation of how the bank swapping works, you mentioned Bank 12 and I'm not sure if you were just giving an unrelated example, subtly correcting me, or just made a typo. I'm getting some messed up text by a couple of the Midenhall Castle NPCs using either value (either blank windows or a sparse random letter) so I figured I'll have to look at those pointers, but I want to make sure I'm not doing this with the wrong bank value. EDIT: These are my values between B772-B7D0 16 90 73 92 7D 92 87 92 91 92 9B 92 A5 92 AF 92 B9 92 02 94 19 95 3E 96 12 98 37 99 12 9C 62 9E 35 A1 0A A4 CE A7 AB AB 5E AF 68 AF 72 AF 7C AF 86 AF 42 B4 37 BB 43 02 B1 07 93 0E 4A 13 FC 17 7D 1B 88 1E 1C 22 54 27 A0 2D 9D 31 16 35 0A 39 DF B4 EB A0 4D BB 17 F4 F9 C3 E2 57 C0 E7 8F 9D So, nothing wrong there, then? Edit 2: So am I understanding right that my problem is probably between Pointer 32 and 35, which are pointers B7A2 and B7A8 respectively? Since B7A8 points to the new bank? Pointer 32 has a lot of the Midenhall dialog, some of which is displayed properly.
Sorry to say, I'm still puzzled here. Title: Re: General NES Hacking Questions Post by: abw on April 19, 2019, 12:01:43 am Okay that went better than I hoped!  :laugh: [...] Adding a note in my copy of the table: DO NOT MESS WITH THE FIRST 5 VALUES! Hurray! To be fair, I did actually mention that a few posts back (http://www.romhacking.net/forum/index.php?topic=27053.msg372393#msg372393), but I'm glad you've got it working now! Anyway when I asked about changing the second text bank at 3FA94, I figured having the second block at $3400F would make it Bank 10 (0A) but when you replied with an explanation of how the bank swapping work, you mentioned Bank 12 and I'm not sure if you were just giving an unrelated example, subtly correcting me, or just made a typo. Heh, that one I managed to write down correctly for a change :P. Banks$0C (starting at 0x030010), $0D (0x034010), and$0E (0x038010) are DW2's 3 empty banks. I'm getting some messed up text by a couple of the Midenhall Castle NPCs using either value (either blank windows or a sparse random letter) so i figured I'll have to look at those pointers, but I want to make sure I'm not doing this with the wrong bank value. Yup, that makes sense. The main script's pointer table actually starts at 0xB762, so you're missing the first 8 pointers, but they're probably fine so I'm not too concerned about that. The pointer table also stops at 0xB7C1, so the other 16 bytes you included here are just the start of the original second script bank's text and aren't really relevant. If you look at the rest of the pointer values, you'll see they keep increasing from $9016 at 0xB772 up to$BB37 at 0xB7A6 and then reset (that'll be the first pointer following the auto-jump) to $0243 at 0xB7A8 and start increasing again up to$390A at 0xB7C0, so those pointer values are all off by $8000 and giving you who knows what from system RAM instead of the data you want from cartridge ROM (as an added bonus, the$2002 PPU register is sensitive to reads, so if you're really lucky, you might also get some other graphical mayhem happening :D). Checking your insert script, I see that you've updated where to jump to when the insertion point reaches 0x17FE7 but didn't update the corresponding header value, which is throwing the pointer value calculations off. Try changing that to #AUTOCMD($17FE7, #HDR($28010)) instead and you should get the right pointer values. Title: Re: General NES Hacking Questions Post by: Choppasmith on April 19, 2019, 12:05:26 pm It occurred to me just before reading your reply that it was probably the pointer values being off for the new bank (you did post that "formula" for pointer values a while back). Had no idea that was what the HDR command in the script was for. Lesson learned for the future, but yep, that did it, it's working beautifully. Just got to start testing and tweaking and then doing all the menu stuff. Man that was nuts but that's a huge hurdle cleared for me thank you so much! :beer: Edit: Chicken Knife gave me his atlas.txt so I could compare translations and I realized you have the "pointer formula" right in there. Doh! XP Title: Re: General NES Hacking Questions Post by: abw on April 20, 2019, 12:51:11 pm Lesson learned for the future, but yep, that did it, it's working beautifully. Just got to start testing and tweaking and then doing all the menu stuff. Man that was nuts but that's a huge hurdle cleared for me thank you so much! :beer: Sweet! Let's hope the testing and tweaking goes well too :). 
Title: Re: General NES Hacking Questions Post by: Choppasmith on April 22, 2019, 01:54:37 pm Okay sorry to post in this thread so soon, but I'm genuinely stumped by the plural rules (I foolishly thought it was a bunch of suffixes in a row, kinda like the dictionary)  . With new monster names need a couple of tweaks not covered by the current plural rules New names Mummy Boy > Boys Hunter Mech > Mechs Dragonfry > No change Gigantes > No change (though I'm not 100% sure if a plural is even needed here, I don't think Gigantes are encountered in groups) Cyclops > No change (actually I just read here (https://www.grammarphobia.com/blog/2013/08/cyclops.html) that Square-Enix goofed and that the correct plural for a cyclops is "Cyclopes", though I suppose if need be, Cyclopses isn't entirely wrong and off limits) Technically Man o' War is listed as "Men o' War" as its plural but I figure with the way the system counts from the end and adds to the end Man o' Wars would be just fine The good news is that the new names also free up the need for a few rules ch > ches (only the Hunter Mech is used and that's a case where it only needs the s added) ngo > no change f > ves (unused anyway) i > ies Mouse/mouse > Mice/mice (they're rats now) rus>rii sh>shes I mean I get that there's essentially two parts of that code. One looks at the end letters and then points to other part that removes/adds letters at the end. Even for something like editing the "ngo" if I wanted to change it to "fry" I only see the "n" and "g" to change, I don't see the o Code: [Select] ; -o pluralization handler: -ngo -> -ngo, -o -> -os 0x01C845|$07:$8835:BD F0 60 LDA $60F0,X ; read second-last letter of monster name 0x01C848|$07:$8838:C9 10 CMP #$10 ; "g" 0x01C84A|$07:$883A:D0 E7    BNE $8823 ; if not -go, append "s" 0x01C84C|$07:$883C:BD EF 60 LDA$60EF,X ; read third-last letter of monster name 0x01C84F|$07:$883F:C9 17    CMP #$17 ; "n" 0x01C851|$07:$8841:D0 E0 BNE$8823 ; if not -ngo, append "s" 0x01C853|$07:$8843:F0 EB    BEQ $8830 ; if -ngo, plural = singular I guess what really confuses me are the if/if not suffix lines. How do I change THOSE? Edit: Also doing Menus (no trouble understanding those :) ) but are you sure those pointers you listed on the wiki are right, abw? I'm getting some really broken up strings like Nevermind, I misread the text start value as 7656 instead of 76E6 which was throwing me off. Title: Re: General NES Hacking Questions Post by: abw on April 22, 2019, 08:20:23 pm Okay sorry to post in this thread so soon, but I'm genuinely stumped by the plural rules (I foolishly thought it was a bunch of suffixes in a row, kinda like the dictionary) . Nope, the pluralization rules are implemented as actual code, not just a table of suffixes or similar, so the bad news is you're going to have to roll up your sleeves and dig in to it a bit in order to make your changes. On the other hand, since you're going to be making changes anyway, you're free to make all the changes you want - you've got all the space from 0x01C801 to 0x01C8F3 to do whatever you want, and if that isn't enough space, most of the rest of the bank is empty. When this code starts running, the singular monster name has already been laid out in RAM at$6119 for you, and by the end, you just need to write the plural form to RAM starting at $60F1 and SEC before you RTS. You can use the debugger to set a breakpoint at the start and step through the code to watch exactly what it does. 
For -ngo specifically, it works like this:$87FC: LDA $60F1,X ; read the final letter of the monster name into A (in this case that's "o", a.k.a. #$18; I think this is the part you missed) $87FF: CMP #$18       ; among other things, this sets the Z (zero) processor flag based on whether A is #$18 ("o") or not$8801: BEQ $8835 ; since Z is set, BEQ follows the branch ; at this point we know the last letter was "o", so we'll check the second-last letter$8835: LDA $60F0,X ; read the second-last letter the of monster name into A (in this case that's "g", a.k.a. #$10) $8838: CMP #$10       ; as before, this sets the Z flag based on whether A is #$10 ("g") or not$883A: BNE $8823 ; since A is #$10, Z is not set and we don't take this branch ; at this point we know the last two letters were "go", so we'll check the third-last letter $883C: LDA$60EF,X    ; read the third-last letter of the monster name (in this case that's "n", a.k.a. #$17)$883F: CMP #$17 ; as before, this sets the Z flag based on whether A is #$17 ("n") or not $8841: BNE$8823      ; since A is #$17, Z is not set and we don't take this branch$8843: BEQ $8830 ; ... which means we do take this branch$8830: SEC            ; the calling code cares about whether C is set or not $8831: RTS ; and at this point we're done, not having changed a single byte of the monster name Title: Re: General NES Hacking Questions Post by: Choppasmith on April 22, 2019, 10:51:19 pm Nope, the pluralization rules are implemented as actual code, not just a table of suffixes or similar, so the bad news is you're going to have to roll up your sleeves and dig in to it a bit in order to make your changes. On the other hand, since you're going to be making changes anyway, you're free to make all the changes you want - you've got all the space from 0x01C801 to 0x01C8F3 to do whatever you want, and if that isn't enough space, most of the rest of the bank is empty. When this code starts running, the singular monster name has already been laid out in RAM at$6119 for you, and by the end, you just need to write the plural form to RAM starting at $60F1 and SEC before you RTS. You can use the debugger to set a breakpoint at the start and step through the code to watch exactly what it does. For -ngo specifically, it works like this:$87FC: LDA $60F1,X ; read the final letter of the monster name into A (in this case that's "o", a.k.a. #$18; I think this is the part you missed) $87FF: CMP #$18       ; among other things, this sets the Z (zero) processor flag based on whether A is #$18 ("o") or not$8801: BEQ $8835 ; since Z is set, BEQ follows the branch ; at this point we know the last letter was "o", so we'll check the second-last letter$8835: LDA $60F0,X ; read the second-last letter the of monster name into A (in this case that's "g", a.k.a. #$10) $8838: CMP #$10       ; as before, this sets the Z flag based on whether A is #$10 ("g") or not$883A: BNE $8823 ; since A is #$10, Z is not set and we don't take this branch ; at this point we know the last two letters were "go", so we'll check the third-last letter $883C: LDA$60EF,X    ; read the third-last letter of the monster name (in this case that's "n", a.k.a. #$17)$883F: CMP #$17 ; as before, this sets the Z flag based on whether A is #$17 ("n") or not $8841: BNE$8823      ; since A is #$17, Z is not set and we don't take this branch$8843: BEQ $8830 ; ... 
which means we do take this branch$8830: SEC            ; the calling code cares about whether C is set or not $8831: RTS ; and at this point we're done, not having changed a single byte of the monster name Thanks that explains it a little better. I now have an idea of what to do, I just don't know the commands. I realize fixing "Mechs" would be the easiest thing, I would just have to remove the line that checked the second to last letter for a c$8875-8878 But I looked at your notes on the Wiki and sort of brainstormed what I WANT to do. Fixing S rules for Gigantes, Magus Code: [Select] ; read second-last letter of monster name ; "u" ; if not -us, append "es" ; read third-last letter of monster name ; "g" ; if not -gus, append "es" ; read second-last letter of monster name ; "e" ; if not -es, append "es" ; read third-last letter of monster name ; "t" ; if -tes, don't change $8830 ; "i" ; replace -us with -i ; Go to$8830 ; replace final letter with "i" ; append "es" Fixing Mummy Boy and Dragonfry Code: [Select] ; "i" ; read second-last letter of monster name ; "o" ; if "-oy", append "s" ; "r" ; if "-ry", don't change ($8830) Fixing Madusa Code: [Select] ; read second-last letter of monster name ; "s" ; if not -sa, append "s" ; read third-last letter of monster name ; "u" ; "e"append e ; Go to$8830 It's something like that right? Again I get the programming gist of it, but I'm not sure what hex notation does what. On a different note, thankfully window editing has gone quite well. I can see what you're talking about a few posts ago with the US "addition" of extra names. By expanding the Battle Command window, I realized I could display full names at the top of the window now, but strangely any unused character spaces still "cover" the top bar of the window. That's a real shame, otherwise it's neat! I'm thinking of keeping the change. Edit: So as brought to my attention by laser lambert, apparently the special [(s)] (F2) doesn't work the same way as DW1. If used for "points", whenever there's 1, it just cuts off the string (experience) or doesn't display the whole window at all (points of damage in battle). Don't want to be a bother (I really really don't I swear! ^^'') But even if I used the Trace Logger/Debugger to find the routine I wouldn't know what to look for, I'm mostly just curious, but is there any way to fix that? Title: Re: General NES Hacking Questions Post by: abw on April 24, 2019, 12:06:32 am When I was doing this for Latin, I took my list of singular monster names (I excluded the unique bosses since they never need plural forms), sorted it right-to-left so that I had all the same suffixes grouped together, wrote out the correct plural form for each name, did some extra Latin-specific stuff you won't need to deal with for English, and then wrote out the singular -> plural transformation rules I needed. Once I had that, writing the code was fairly straightforward; it's a little bit long and boring, but the individual sections are pretty simple. This is an oversimplification (particularly for CMP, which does multiple other things simultaneously), but for the purposes of this exercise, you can treat CMP #$hex as "check if A is #$hex", BEQ $addr as "if it is, go to$addr", and BNE $addr as "if it isn't, go to$addr". 
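If it helps to see that branching outside of ASM first, here's a plain Python mock-up of the same shape of logic (the handlers are just the rules you sketched above, not the game's actual code):
Code: [Select]
# Mock-up of the dispatch: look at the last letter, branch to a handler,
# fall through to "append s" as the default. Rules here are only examples.
def pluralize(name):
    if name.endswith("y"):
        if name.endswith("oy"):          # Mummy Boy -> Mummy Boys
            return name + "s"
        if name.endswith("ry"):          # Dragonfry -> Dragonfry
            return name
        return name[:-1] + "ies"         # default -y -> -ies
    if name.endswith("s"):               # Gigantes, Cyclops -> unchanged
        return name
    return name + "s"                    # default: append "s"

for n in ("Mummy Boy", "Dragonfry", "Gigantes", "Hunter Mech"):
    print(n, "->", pluralize(n))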
As with script editing, I recommend writing code as code rather than typing things in to a hex editor, so you'll want to grab an assembler (anything that works for 65816 should also work for 6502, so pick your favourite SNES or NES assembler; I was actually using Asar (http://www.romhacking.net/utilities/964/), but I've been meaning to give xkas-plus (http://www.romhacking.net/forum/index.php?topic=19640.0) a try), and if you haven't done any ASM before, you'll also want some reference material (I like the handy instruction descriptions in chapter 18 of Programming the 65816 (http://www.romhacking.net/documents/423/), but feel free to consult other resources if that one doesn't suit your style). Also, if you get the chance to be clever, take it! E.g. if you only have one monster name that ends in "r" and you're writing the code to handle names that end in "r", you don't have to check that the 2nd-last letter is "a" or the 3rd-last letter is "W" or anything else, you can immediately just change the 9th-last letter to "e" to get "Man o' War" pluralized to "Men o' War". By expanding the Battle Command window, I realized I could display full names at the top of the window now, but strangely any unused character spaces still "cover" the top bar of the window. That's a real shame, otherwise it's neat! I'm thinking of keeping the change. Check out the screenshots from my translation - I also added some code for changing trailing spaces in hero names to top borders when the names are displayed as part of a border :P. Edit: So as brought to my attention by laser lambert, apparently the special [(s)] (F2) doesn't work the same way as DW1. If used for "points", whenever there's 1, it just cuts off the string (experience) or doesn't display the whole window at all (points of damage in battle). Don't want to be a bother (I really really don't I swear! ^^'') But even if I used the Trace Logger/Debugger to find the routine I wouldn't know what to look for, I'm mostly just curious, but is there any way to fix that? Much like the [name] control code, [(s)] just tests the 16-bit value at $8F; the code for that is actually right before the pluralization code: Code: [Select] ; data -> code ; if$8F-$90 == #$0001, print "s" + [end-FA] to $60F1 and SEC, else print [end-FA] and CLC ; indirect control flow target ; from$02:$BE37 via$8006 0x01C7E8|$07:$87D8:A5 90    LDA $90 0x01C7EA|$07:$87DA:D0 05 BNE$87E1 ; if $90 > 0, add "s" 0x01C7EC|$07:$87DC:A4 8F LDY$8F 0x01C7EE|$07:$87DE:88      DEY 0x01C7EF|$07:$87DF:F0 0C    BEQ $87ED ; if$90 == 0 and $8F - 1 == 0 (i.e.$8F == 1), do not add "s" ; control flow target (from $87DA) 0x01C7F1|$07:$87E1:A9 1C LDA #$1C ; "s" 0x01C7F3|$07:$87E3:8D F1 60 STA $60F1 0x01C7F6|$07:$87E6:A9 FA LDA #$FA ; [end-FA] 0x01C7F8|$07:$87E8:8D F2 60 STA $60F2 0x01C7FB|$07:$87EB:38 SEC 0x01C7FC|$07:$87EC:60 RTS ; control flow target (from$87DF) 0x01C7FD|$07:$87ED:A9 FA    LDA #$FA ; [end-FA] 0x01C7FF|$07:$87EF:18 CLC 0x01C800|$07:$87F0:60 RTS The [number] control code, which is used for both experience and gold gains, also uses$8F-$90, so I would have expected [(s)] to work for both of those situations. 
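(In Python terms, the decision that code is trying to make boils down to something like this - $8F is the low byte and $90 the high byte of the count:)
Code: [Select]
# Model of the [(s)] decision, not the game code: append "s" unless the
# 16-bit count stored at $8F (low) / $90 (high) is exactly 1.
def wants_s(lo, hi):
    count = lo | (hi << 8)
    return count != 1

print(wants_s(0x01, 0x00))   # 1 point    -> False (no "s")
print(wants_s(0x02, 0x00))   # 2 points   -> True
print(wants_s(0x00, 0x01))   # 256 points -> True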
Title: Re: General NES Hacking Questions Post by: Choppasmith on April 24, 2019, 06:41:42 am When I was doing this for Latin, I took my list of singular monster names (I excluded the unique bosses since they never need plural forms), sorted it right-to-left so that I had all the same suffixes grouped together, wrote out the correct plural form for each name, did some extra Latin-specific stuff you won't need to deal with for English, and then wrote out the singular -> plural transformation rules I needed. Once I had that, writing the code was fairly straightforward; it's a little bit long and boring, but the individual sections are pretty simple. This is an oversimplification (particularly for CMP, which does multiple other things simultaneously), but for the purposes of this exercise, you can treat CMP #$hex as "check if A is #$hex", BEQ$addr as "if it is, go to $addr", and BNE$addr as "if it isn't, go to $addr". As with script editing, I recommend writing code as code rather than typing things in to a hex editor, so you'll want to grab an assembler (anything that works for 65816 should also work for 6502, so pick your favourite SNES or NES assembler; I was actually using Asar (http://www.romhacking.net/utilities/964/), but I've been meaning to give xkas-plus (http://www.romhacking.net/forum/index.php?topic=19640.0) a try), and if you haven't done any ASM before, you'll also want some reference material (I like the handy instruction descriptions in chapter 18 of Programming the 65816 (http://www.romhacking.net/documents/423/), but feel free to consult other resources if that one doesn't suit your style). Also, if you get the chance to be clever, take it! E.g. if you only have one monster name that ends in "r" and you're writing the code to handle names that end in "r", you don't have to check that the 2nd-last letter is "a" or the 3rd-last letter is "W" or anything else, you can immediately just change the 9th-last letter to "e" to get "Man o' War" pluralized to "Men o' War". Thanks. I was worried my post came across too "Wahh I don't wanna! Just do it for me please." which wasn't what I intended at all. But that's exactly the info I was hoping for! Quote Check out the screenshots from my translation - I also added some code for changing trailing spaces in hero names to top borders when the names are displayed as part of a border :P. Ah, you sly dog! Guess I'll have to download your translation and see what you did. Speaking of. I was earnestly going to try to find monster and spell name length and used your explanation in that previous DW1 thread. But looking it over it confused me more than I thought it would. In your example you said I could look up HEAL by setting a Read Breakpoint at$B5E6. The thing is looking back at the rom, I can't see what B5E6 is supposed to be. The pointer value was something similar but different like B8B6 (sorry away from my computer right now, something like that). Am I missing a real obvious conversion or something? I thought the magic RAM to ROM conversion number was 3FF0? 
Quote Much like the [name] control code, [(s)] just tests the 16-bit value at $8F; the code for that is actually right before the pluralization code: Code: [Select] ; data -> code ; if$8F-$90 == #$0001, print "s" + [end-FA] to $60F1 and SEC, else print [end-FA] and CLC ; indirect control flow target ; from$02:$BE37 via$8006 0x01C7E8|$07:$87D8:A5 90    LDA $90 0x01C7EA|$07:$87DA:D0 05 BNE$87E1 ; if $90 > 0, add "s" 0x01C7EC|$07:$87DC:A4 8F LDY$8F 0x01C7EE|$07:$87DE:88      DEY 0x01C7EF|$07:$87DF:F0 0C    BEQ $87ED ; if$90 == 0 and $8F - 1 == 0 (i.e.$8F == 1), do not add "s" ; control flow target (from $87DA) 0x01C7F1|$07:$87E1:A9 1C LDA #$1C ; "s" 0x01C7F3|$07:$87E3:8D F1 60 STA $60F1 0x01C7F6|$07:$87E6:A9 FA LDA #$FA ; [end-FA] 0x01C7F8|$07:$87E8:8D F2 60 STA $60F2 0x01C7FB|$07:$87EB:38 SEC 0x01C7FC|$07:$87EC:60 RTS ; control flow target (from$87DF) 0x01C7FD|$07:$87ED:A9 FA    LDA #$FA ; [end-FA] 0x01C7FF|$07:$87EF:18 CLC 0x01C800|$07:$87F0:60 RTS The [number] control code, which is used for both experience and gold gains, also uses$8F-$90, so I would have expected [(s)] to work for both of those situations. Thanks again for this. How strange but I guess that's why the table only uses it for Gold and nothing else, maybe the devs just couldn't get it working properly? In a way I don't find it with fretting over. I could just tweak the script and then change my "point(s)" dictionary entry and be able to save MORE space that way. Title: Re: General NES Hacking Questions Post by: abw on April 24, 2019, 08:09:25 pm Thanks. I was worried my post came across too "Wahh I don't wanna! Just do it for me please." which wasn't what I intended at all. But that's exactly the info I was hoping for! We aim to please ;D. Ah, you sly dog! Guess I'll have to download your translation and see what you did. You won't be able to completely copy my code since I cannibalized the free space I created by shortening "ADVENTURE LOG" to "VOLUMEN", but the ASM for handling menu control codes$98 - $9F starts at 0x3ED8A; the original game ran the same code for all of$9A - $9F, so I stole$9B - $9F for the "names in border" code. Speaking of. I was earnestly going to try to find monster and spell name length and used your explanation in that previous DW1 thread. But looking it over it confused me more than I thought it would. In your example you said I could look up HEAL by setting a Read Breakpoint at$B5E6. The thing is looking back at the rom, I can't see what B5E6 is supposed to be. The pointer value was something similar but different like B8B6 (sorry away from my computer right now, something like that). Am I missing a real obvious conversion or something? I thought the magic RAM to ROM conversion number was 3FF0? Sorry, part of the problem here is that I can't type properly :'(. I've updated that post to use the correct breakpoint of $BE56 as shown in the debugger snapshot instead of$B5E6; after that, it goes back to the relationship between RAM and ROM addresses: the read breakpoint uses RAM address $BE56, which for DW1 corresponds to a ROM offset of 0x3E66 (not the address you're looking for), 0x7E66 (pick me because I say "HEAL"!), or 0xBE66 (also not the address you're looking for), or theoretically 0xFE66 (but this is reserved for the fixed bank, so it's also not the address you're looking for). Does it make any more sense like this? 
DW1 ROM Offset      DW1 RAM Address     DW1 RAM to ROM Conversion Number
$0010 - $400F       $8000 - $BFFF       -$7FF0
$4010 - $800F       $8000 - $BFFF       -$3FF0
$8010 - $C00F       $8000 - $BFFF       +$10
$C010 - $1000F      $C000 - $FFFF       +$4010
Thanks again for this. How strange but I guess that's why the table only uses it for Gold and nothing else, maybe the devs just couldn't get it working properly? In a way I don't find it worth fretting over. I could just tweak the script and then change my "point(s)" dictionary entry and be able to save MORE space that way. Wait a minute... Am I missing something, or is getting 1 piece of gold in this game actually impossible? Slimes are worth 2 gold, unique saleable items go for 2 gold, and even buying an Antidote Herb with a Golden Card still gets you a discount of 2 gold... Hmm, yup, that F2 is just plain broken when $8F-$90 is #$0001. Instead of
Code: [Select]
LDA #$FA
CLC
RTS
which as you've seen results in the rest of the string getting cut off, it should be doing
Code: [Select]
LDA #$FA
STA $60F1
SEC
RTS
So the good news is: it's not you, it's the game :P. Replacing the CLC/RTS with BNE $87E3 should be enough to fix that up. Title: Re: General NES Hacking Questions Post by: Choppasmith on April 25, 2019, 02:47:18 pm Okay, really embarrassing question but I can't seem to open the file in either program you recommended. Asar seems to ONLY accept SNES roms and xkas plus seems to only accept disassembled code. Since you've said you've been using the former, I feel like I'm missing something incredibly obvious. Edit: Also, I thought I was getting your explanation of Read Breakpoints. I thought to find Monster Length I'd place a Read Breakpoint for Slime which the Pointer points to as memory at B718 (to avoid possible confusion I'll just say that with Chicken Knife sharing his atlas script I thought I'd use the work you guys did in moving the Item and Monster names). I figured it'd have to be CPU memory because PPU memory doesn't let me put in B718 (I assume 3ff0 is the maximum?) And yet when I try to go into a battle with a Slime, the Debugger doesn't trip anything. And yes I made sure it's enabled before you ask :p It's an easy thing to miss, so I wouldn't blame you if you asked. Title: Re: General NES Hacking Questions Post by: abw on April 25, 2019, 05:43:04 pm Okay, really embarrassing question but I can't seem to open the file in either program you recommended. Asar seems to ONLY accept SNES roms and xkas plus seems to only accept disassembled code. Since you've said you've been using the former, I feel like I'm missing something incredibly obvious. Asar assumes a SNES ROM and memory model by default, but you can get it to work with NES ROMs by flipping a couple of switches. Since it doesn't come with much in the way of examples, give this a try:
Quote from: test.asm
; Example NES 6502 ASM file: writes a small infinite loop.
; Put this file in the same directory as asar and execute it with e.g.
;   copy /Y nul test.bin
;   asar -nocheck test.asm test.bin
; After that, test.bin should contain 64 KB of #$00 followed by "A9 00 4C 00 80"
norom       ; stop Asar from trying to apply SNES memory mapping to this NES code
org $10010  ; set the ROM file insertion point to 0x10010
base $8000  ; set the starting RAM address to $8000
loop:
LDA #$00
JMP loop
Edit: Also, I thought I was getting your explanation of Read Breakpoints.
I thought to find Monster Length I'd place a Read Breakpoint for Slime which the Pointer points to as memory at B718 (to avoid possible confusion I'll just say that with Chicken Knife sharing his atlas script I thought I'd use the work you guys did in moving the Item and Monster names). I figured it'd have to be CPU memory because PPU memory doesn't let me put in B718 (I assume 3ff0 is the maximum?) And yet when I try to go in to a battle with a Slime, the Debugger doesn't trip anything. And yes I made sure it's enabled before you ask :p It's an easy thing to miss, so I wouldn't blame you if you asked. Yeah, unless you're specifically looking for graphics stuff, a CPU breakpoint is probably what you want. The original monster list was at 0x1B728 a.k.a.$06:$B718, but I moved my monster list to 0x1D050, a.k.a.$07:$9040, so if you're looking for Slime, that's where it'll be. Title: Re: General NES Hacking Questions Post by: Choppasmith on April 27, 2019, 07:40:19 pm Asar assumes a SNES ROM and memory model by default, but you can get it to work with NES ROMs by flipping a couple of switches. Since it doesn't come with much in the way of examples, give this a try:Yeah, unless you're specifically looking for graphics stuff, a CPU breakpoint is probably what you want. The original monster list was at 0x1B728 a.k.a.$06:$B718, but I moved my monster list to 0x1D050, a.k.a.$07:$9040, so if you're looking for Slime, that's where it'll be. I guess I figured what was there at the pointer would still work. Thanks. Well I got the game to stop and saw 0F:F47B:8D A0 60 STA$60A0 = #$0B And thanks to laserlambert's testing I know that the line 1 of monster names gets cut off at 11 letters, so I figure that's GOTTA be it right? Am I right in thinking maybe this ISN'T a hardcoded value but something that's loaded in memory? I also know that the second line for monsters is limited to 9 letters which is at least 1 letter short for my new monster names and even when trying to make a breakpoint based on the Monster Line 2 pointer, I got a break but couldn't find anythign resembling a 9 letter limit. I even went BACK to DW1 and tried to recreate the steps you did in finding the length limit of HEAL. made a breakpoint and had it stop when casting and found 01:A868:AE E2 64 LDX$64E2 = #$0F Note this is from my hack, so i figured this would have to be the new 15 letter limit for spells. If so, why does this use LDX? And how did you turn that into the ROM address of$77E9? Or am I just not looking at the right thing at all? Title: Re: General NES Hacking Questions Post by: abw on April 28, 2019, 09:21:17 am This sounds okay as far as it goes, but you haven't hit ROM yet, so you need to keep following the trail a bit further. Once you find the value you're looking for being read from somewhere in the $8000 -$FFFF range, then you can stop and convert the RAM address to a ROM address (/ get FCEUX to do it for you if you want). 0F:F47B:8D A0 60  STA $60A0 = #$0B This shows that the game is about to store whatever the current value of A is to $60A0 (which was #$0B just before that instruction executed), so you'll need to find out where $60A0 became #$0B in the first place. 01:A868:AE E2 64  LDX $64E2 = #$0F Similarly, this shows that the game is loading X with the value of $64E2, which happens to be #$0F; you'll need to find out how $64E2 became #$0F. 
Title: Re: General NES Hacking Questions Post by: Choppasmith on May 04, 2019, 06:40:23 pm Asar assumes a SNES ROM and memory model by default, but you can get it to work with NES ROMs by flipping a couple of switches. Since it doesn't come with much in the way of examples, give this a try: Sorry, I did this and ran it, it made the bin file but I'm still getting the "Not an SNES ROM Error" You might have to dumb it down even more for me. (9_6) This sounds okay as far as it goes, but you haven't hit ROM yet, so you need to keep following the trail a bit further. Once you find the value you're looking for being read from somewhere in the $8000 -$FFFF range, then you can stop and convert the RAM address to a ROM address (/ get FCEUX to do it for you if you want). This shows that the game is about to store whatever the current value of A is to $60A0 (which was #$0B just before that instruction executed), so you'll need to find out where $60A0 became #$0B in the first place. Similarly, this shows that the game is loading X with the value of $64E2, which happens to be #$0F; you'll need to find out how $64E2 became #$0F. Okay so I used trace logger to log the data from the world map to the start of a battle to the breakpoint. Still not seeing what I want. I tried to put an Execution Breakpoint on 60A0, but I don't think I'm doing it right. When doubleclicking in the debugger it just gives me "K==#00" as the "condition" and when I try to change it I get an invalid condition. Really sorry about this. It's hard not to feel like that guy who was trying to translate DW1 into Spanish. It's frustrating to feel so clueless, but this is uncharted territory for me and I'm determined to pick up SOMETHING new for later games. On another note, the Assembly guide you posted is really handy, thanks! It's nice to know what all those 3 letter terms mean. Title: Re: General NES Hacking Questions Post by: abw on May 15, 2019, 06:06:35 pm Sorry for the delay in responding, I've been offline for the past couple of weeks! Sorry, I did this and ran it, it made the bin file but I'm still getting the "Not an SNES ROM Error" You might have to dumb it down even more for me. (9_6) I'm not sure how much further down I can go :P. Go to the directory containing Asar, copy the sample ASM I provided into a new file named test.asm, make an empty file named test.bin, and then open a command prompt in that directory and run "asar -nocheck test.asm test.bin". Works like a charm for me. Okay so I used trace logger to log the data from the world map to the start of a battle to the breakpoint. Still not seeing what I want. I tried to put an Execution Breakpoint on 60A0, but I don't think I'm doing it right. When doubleclicking in the debugger it just gives me "K==#00" as the "condition" and when I try to change it I get an invalid condition. For $60A0, you'd want a write breakpoint since you're looking for places where the game writes #$0B to $60A0. Execute breakpoints fire when the code at the address you set the breakpoint for gets executed (e.g. the F47B in "0F:F47B:8D A0 60 STA$60A0 = #$0B") and read/write breakpoints fire when the address you set the breakpoint for gets modified by some code (e.g. 60A0 is being written to in "0F:F47B:8D A0 60 STA$60A0 = #$0B"). With the trace log, you don't necessarily need to set any breakpoints; they just help to reduce the size of the log file you need to look through. 
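If the log does get unwieldy, a couple of lines of Python will pull out just the lines that touch the address you care about ("trace.log" and $60A0 here are placeholders - point it at your own log and address):
Code: [Select]
# Poor man's breakpoint: grep an FCEUX trace log for a given address.
target = "$60A0"
with open("trace.log", "r") as f:
    for lineno, line in enumerate(f, 1):
        if target in line:
            print(f"{lineno}: {line.rstrip()}")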
If I do the same thing as you with an unaltered ROM, making a trace log from the world map to the start of a battle, it's a huge file but I can search it for$B718 (the start of the monster list) to get: Code: [Select] $F422:A0 00 LDY #$00                                     A:18 X:00 Y:00 S:F7 P:nvUBdIzc $F424:AE A0 60 LDX$60A0 = #$0B A:18 X:00 Y:00 S:F7 P:nvUBdIZc$F427:B1 57     LDA ($57),Y @$B718 = #$36 A:18 X:0B Y:00 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:36 X:0B Y:00 S:F7 P:nvUBdIzc$F42B:F0 07     BEQ $F434 A:36 X:0B Y:00 S:F7 P:nvUBdIzc$F42D:9D FF 00  STA $00FF,X @$010A = #$5F A:36 X:0B Y:00 S:F7 P:nvUBdIzc$F430:C8        INY                                          A:36 X:0B Y:00 S:F7 P:nvUBdIzc $F431:CA DEX A:36 X:0B Y:01 S:F7 P:nvUBdIzc$F432:D0 F3     BNE $F427 A:36 X:0A Y:01 S:F7 P:nvUBdIzc$F427:B1 57     LDA ($57),Y @$B719 = #$15 A:36 X:0A Y:01 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:15 X:0A Y:01 S:F7 P:nvUBdIzc$F42B:F0 07     BEQ $F434 A:15 X:0A Y:01 S:F7 P:nvUBdIzc$F42D:9D FF 00  STA $00FF,X @$0109 = #$5F A:15 X:0A Y:01 S:F7 P:nvUBdIzc$F430:C8        INY                                          A:15 X:0A Y:01 S:F7 P:nvUBdIzc $F431:CA DEX A:15 X:0A Y:02 S:F7 P:nvUBdIzc$F432:D0 F3     BNE $F427 A:15 X:09 Y:02 S:F7 P:nvUBdIzc$F427:B1 57     LDA ($57),Y @$B71A = #$12 A:15 X:09 Y:02 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:12 X:09 Y:02 S:F7 P:nvUBdIzc$F42B:F0 07     BEQ $F434 A:12 X:09 Y:02 S:F7 P:nvUBdIzc$F42D:9D FF 00  STA $00FF,X @$0108 = #$5F A:12 X:09 Y:02 S:F7 P:nvUBdIzc$F430:C8        INY                                          A:12 X:09 Y:02 S:F7 P:nvUBdIzc $F431:CA DEX A:12 X:09 Y:03 S:F7 P:nvUBdIzc$F432:D0 F3     BNE $F427 A:12 X:08 Y:03 S:F7 P:nvUBdIzc$F427:B1 57     LDA ($57),Y @$B71B = #$16 A:12 X:08 Y:03 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:16 X:08 Y:03 S:F7 P:nvUBdIzc$F42B:F0 07     BEQ $F434 A:16 X:08 Y:03 S:F7 P:nvUBdIzc$F42D:9D FF 00  STA $00FF,X @$0107 = #$5F A:16 X:08 Y:03 S:F7 P:nvUBdIzc$F430:C8        INY                                          A:16 X:08 Y:03 S:F7 P:nvUBdIzc $F431:CA DEX A:16 X:08 Y:04 S:F7 P:nvUBdIzc$F432:D0 F3     BNE $F427 A:16 X:07 Y:04 S:F7 P:nvUBdIzc$F427:B1 57     LDA ($57),Y @$B71C = #$0E A:16 X:07 Y:04 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:0E X:07 Y:04 S:F7 P:nvUBdIzc$F42B:F0 07     BEQ $F434 A:0E X:07 Y:04 S:F7 P:nvUBdIzc$F42D:9D FF 00  STA $00FF,X @$0106 = #$5F A:0E X:07 Y:04 S:F7 P:nvUBdIzc$F430:C8        INY                                          A:0E X:07 Y:04 S:F7 P:nvUBdIzc $F431:CA DEX A:0E X:07 Y:05 S:F7 P:nvUBdIzc$F432:D0 F3     BNE $F427 A:0E X:06 Y:05 S:F7 P:nvUBdIzc$F427:B1 57     LDA ($57),Y @$B71D = #$FF A:0E X:06 Y:05 S:F7 P:nvUBdIzc$F429:C9 FF     CMP #$FF A:FF X:06 Y:05 S:F7 P:NvUBdIzc$F42B:F0 07     BEQ $F434 A:FF X:06 Y:05 S:F7 P:nvUBdIZC$F434:60        RTS (from $F3FE) --------------------------- A:FF X:06 Y:05 S:F7 P:nvUBdIZC which shows that the game is copying data from$B718-$B71D to$010A-$0106 (stored backwards) until it reads a #$FF (monster name end token) or X reaches #$00, and that X was set based on$60A0. 
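In Python terms, that loop is doing roughly this (a model of the behaviour, not the game code - the byte values are the ones from the log above):
Code: [Select]
# Copy up to max_len bytes of a monster name, stopping early at the #$FF end
# token, storing them backwards from $00FF+X (max_len plays the role of $60A0).
def copy_name(name_bytes, max_len):
    buf = {}
    x, y = max_len, 0
    while x != 0:
        value = name_bytes[y]             # LDA ($57),Y
        if value == 0xFF:                 # CMP #$FF / BEQ -> done
            break
        buf[0x00FF + x] = value           # STA $00FF,X (backwards)
        y += 1                            # INY
        x -= 1                            # DEX
    return buf

slime = [0x36, 0x15, 0x12, 0x16, 0x0E, 0xFF]   # "Slime" + end token, per the log
print({hex(k): hex(v) for k, v in copy_name(slime, 0x0B).items()})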
Spoiler: As a side note, a little further down, you'll see the game copies the monster name from $00FF,X to$6119,Y, where it will eventually get used by the [name] control code: Code: [Select] $FCE8:AE A0 60 LDX$60A0 = #$0B A:FF X:06 Y:00 S:F9 P:nvUBdIZc$FCEB:BD FF 00  LDA $00FF,X @$010A = #$36 A:FF X:0B Y:00 S:F9 P:nvUBdIzc$FCEE:99 19 61  STA $6119,Y @$6119 = #$25 A:36 X:0B Y:00 S:F9 P:nvUBdIzc$FCF1:C8        INY                                          A:36 X:0B Y:00 S:F9 P:nvUBdIzc $FCF2:CA DEX A:36 X:0B Y:01 S:F9 P:nvUBdIzc$FCF3:D0 F6     BNE $FCEB A:36 X:0A Y:01 S:F9 P:nvUBdIzc ... Searching backwards in the trace log for$60A0, the very first result is this: Code: [Select] $FC92:A9 0B LDA #$0B                                     A:00 X:00 Y:00 S:FA P:nvUBdIZc $FC94:8D A0 60 STA$60A0 = #$01 A:0B X:00 Y:00 S:FA P:nvUBdIzc So$60A0 got its value from A, and A got its value set based on $FC93 (the #$0B part of "LDA #$0B"), which unlike$60A0 comes from ROM i.e. 0x3FCA3. Ta-da! With that as a guide, see if you can find where the maximum length of the second "line" of monster names in the main dialogue box is set (hint: it's #$09 and it's not too far away from where the maximum length of the first "line" is set), and then see if you can track down where the lengths for each of the two lines in the monster menu list get set (hint: same values as the dialogue box lengths, but set in a different area of the code; they'll still be in your trace log, though). Really sorry about this. It's hard not to feel like that guy who was trying to translate DW1 into Spanish. It's frustrating to feel so clueless, but this is uncharted territory for me and I'm determined to pick up SOMETHING new for later games. On another note, the Assembly guide you posted is really handy, thanks! It's nice to know what all those 3 letter terms mean. Yeah, if you're not used to this kind of thing, it can take a while to really sink in. Just keep at it and you'll get the hang of it sooner or later! Title: Re: General NES Hacking Questions Post by: Choppasmith on May 18, 2019, 07:11:35 am Hey! Glad to see your back and that you're okay! I was genuinely worried for a bit there that something bad might've happened that would've taken you out of the picture. While I'm sure I could've found help, you're a cool guy and it would've been a bummer to not be able to finish this while learning how to do it on my own. But anyway... I'm not sure how much further down I can go :P. Go to the directory containing Asar, copy the sample ASM I provided into a new file named test.asm, make an empty file named test.bin, and then open a command prompt in that directory and run "asar -nocheck test.asm test.bin". Works like a charm for me. I wonder if it's a Windows thing, are you on 10? When I try to run that very same command from the command line, it seems to work for a second but just takes me back to the command line with a modified test.bin and trying to run asar again just gives me the usual. Searching backwards in the trace log for$60A0, the very first result is this: Code: [Select] $FC92:A9 0B LDA #$0B                                     A:00 X:00 Y:00 S:FA P:nvUBdIZc $FC94:8D A0 60 STA$60A0 = #$01 A:0B X:00 Y:00 S:FA P:nvUBdIzc So$60A0 got its value from A, and A got its value set based on $FC93 (the #$0B part of "LDA #$0B"), which unlike$60A0 comes from ROM i.e. 0x3FCA3. Ta-da! 
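(If you want the arithmetic behind that last jump as something you can reuse, here's a tiny Python helper - treat it as a sketch; prg_bank is whatever bank is mapped at the address, which for $FC93 is the fixed bank $0F:)
Code: [Select]
# RAM address -> ROM file offset for a 16 KB-banked PRG ROM with an iNES header.
HEADER = 0x10       # iNES header size
BANK   = 0x4000     # PRG bank size

def ram_to_rom(prg_bank, ram_addr):
    slot = 0 if ram_addr < 0xC000 else 1    # $8000-$BFFF vs $C000-$FFFF
    return prg_bank * BANK - 0x8000 - slot * BANK + ram_addr + HEADER

print(hex(ram_to_rom(0x0F, 0xFC93)))   # -> 0x3fca3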
With that as a guide, see if you can find where the maximum length of the second "line" of monster names in the main dialogue box is set (hint: it's #$09 and it's not too far away from where the maximum length of the first "line" is set), and then see if you can track down where the lengths for each of the two lines in the monster menu list get set (hint: same values as the dialogue box lengths, but set in a different area of the code; they'll still be in your trace log, though). So on one hand I DID find the second line monster value of 9 at 3FCBF, though I'm not sure how you turned FC93 to 3FCA3. I mean yeah you added 30010 but where did THAT come from? It doesn't quite match up with what you were talking about RAM to ROM addresses on the last page. And I made a honest effort, but I can't seem to find what you're talking about for the Monster List window. I DID find the subroutine FCE8 in my trace log and while trying to understand it still makes my eyes go @_@ I can understand enough that there's two sections concerning whether or not the Monster needs that second line printed in the window. And I can see that it loads the value as X as opposed to A in the main dialog window. But can't seem to find anything in my log about the value being stored in 60A0. Title: Re: General NES Hacking Questions Post by: abw on May 18, 2019, 12:57:54 pm Hey! Glad to see your back and that you're okay! I was genuinely worried for a bit there that something bad might've happened that would've taken you out of the picture. While I'm sure I could've found help, you're a cool guy and it would've been a bummer to not be able to finish this while learning how to do it on my own. But anyway... Yeah, every now and then I go offline for a couple of weeks for IRL stuff, though one time it was for an entire year! I wonder if it's a Windows thing, are you on 10? When I try to run that very same command from the command line, it seems to work for a second but just takes me back to the command line with a modified test.bin and trying to run asar again just gives me the usual. I'm actually on Windows 7 (cuz eww 8 and 10), but if you're getting a modified test.bin, then it sounds like Asar is working. Try changing test.asm and see if you get a different test.bin. So on one hand I DID find the second line monster value of 9 at 3FCBF, Nice job! though I'm not sure how you turned FC93 to 3FCA3. I mean yeah you added 30010 but where did THAT come from? It doesn't quite match up with what you were talking about RAM to ROM addresses on the last page. If the Trace Logger included bank number,$FC93 would show up as $0F:$FC93, and $0F *$4000 - $8000 -$01 * $4000 +$FC93 + $10 =$3FCA3 (i.e. <ROM bank number> * <bank size> - <base RAM-to-ROM offset> - <RAM bank number> * <bank size> + <RAM address> + <iNES header size>). Or find $FC93 in the Hex Editor -> right click -> Go Here In ROM File. And I made a honest effort, but I can't seem to find what you're talking about for the Monster List window. I DID find the subroutine FCE8 in my trace log and while trying to understand it still makes my eyes go @_@ Ah,$FCE8's not so bad :P. When you're trying to wrap your head around a block of code, remember that the Debugger and Trace Logger give you two different views of the same thing; sometimes it's easier to understand what's going on when looking at one instead of the other. 
Here's the basic code you'll see in the Debugger: Code: [Select] 0F:FCE8:AE A0 60 LDX $60A0 0F:FCEB:BD FF 00 LDA$00FF,X 0F:FCEE:99 19 61 STA $6119,Y 0F:FCF1:C8 INY 0F:FCF2:CA DEX 0F:FCF3:D0 F6 BNE$FCEB 0F:FCF5:60      RTS and here's a commented version: Spoiler: Code: [Select] ; copy $60A0 bytes of data from$00FF,X to $6119,Y ; X is used as a read index, Y as a write index ; data gets copied in reverse order ; IN: ; A/X/C = irrelevant ; Y = current write index ; OUT: ; A = last byte copied (but calling code doesn't care) ; X = 0 ; Y = current write index; this is important since the calling code needs to remember the write index from the first segment when dealing with the second segment ; C = unchanged ; control flow target (from$FC9D, $FCBA) 0x03FCF8|$0F:$FCE8:AE A0 60 LDX$60A0 ; initialize the read index to the value of $60A0 ; control flow target (from$FCF3) 0x03FCFB|$0F:$FCEB:BD FF 00 LDA $00FF,X ; read data from$00FF,X 0x03FCFE|$0F:$FCEE:99 19 61 STA $6119,Y ; write data to$6119,Y 0x03FD01|$0F:$FCF1:C8      INY ; increment write index 0x03FD02|$0F:$FCF2:CA      DEX ; decrement read index 0x03FD03|$0F:$FCF3:D0 F6    BNE $FCEB ; if the read index is not 0, loop back to$FCEB 0x03FD05|$0F:$FCF5:60      RTS ; otherwise the read index is 0, so we're done I can understand enough that there's two sections concerning whether or not the Monster needs that second line printed in the window. And I can see that it loads the value as X as opposed to A in the main dialog window. But can't seem to find anything in my log about the value being stored in 60A0. If you keep looking for reads on $B718, you should notice that the game scans through the monster name list a few times while starting a battle; the last one is for the monster list menu (you can also easily isolate this one by starting a trace log just before pressing FIGHT, as the monster menu gets redrawn after that point). At the spot where the breakpoint fires, you'll be in the same block of code as for the main dialogue window, but coming from a different place (the stack will show something like FA,F3,C5,EF,..., which means the last JSR before you got to this code ended at$F3FA [so started at $F3F8], and the JSR before that ended at$EFC5 [$EFC3]). Searching backwards for$60A0 from there should quickly get you to: Code: [Select] $EFA9:A9 0B LDA #$0B                                     A:00 X:18 Y:07 S:F1 P:nvUBdIZc $EFAB:8D A0 60 STA$60A0 = #$00 A:0B X:18 Y:07 S:F1 P:nvUBdIzc and searching forwards will eventually (after a long series of other uses for$60A0) get you the maximum length of the second line too (or if you look in the Debugger, the code for handling the second line of monster names is only a few lines of ASM away from the code for handling the first line). Title: Re: General NES Hacking Questions Post by: Choppasmith on June 13, 2019, 09:43:40 pm Okay, so first of all, sorry for the delay. I mean yeah I work but a big reason is I just got nice CPU upgrade so I've been finally able to play DQXI among other things, and I think the last time I looked at this (about a week ago) my brain was like "Nope, not today..." but the Smash reveal made me go... "yeah I need to get back to this." If anything I'm just eager to get to 3 and finish the NES Erdrick trilogy at the very very least. Anyway. Good news is I not only figured out the monster name menu length (though to your credit, you made it pretty easy in your last post) but I ALSO got the spell length for dialog. 
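One more trick for reading those stack bytes without doing the math in your head - a couple of lines of Python will pair them back up into return addresses (the example bytes are the ones mentioned above):
Code: [Select]
# Turn the FA,F3,C5,EF stack bytes back into JSR return addresses.
stack = [0xFA, 0xF3, 0xC5, 0xEF]
for lo, hi in zip(stack[0::2], stack[1::2]):
    ret = (hi << 8) | lo
    print(f"JSR ended at ${ret:04X} (so started at ${ret - 2:04X})")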
(https://i.imgur.com/CC0s7l2.png) (yeah I swapped out Heal for Holy Protection as a quick way to test) I'm not sure if I want to mess with the menu length and keep it abbreviated like I did with DW1 (because I know that, unlike DW1, not only are the spell menus done differently, but there are many more long spell names too). Otherwise that's all taken care of. Bad news is, and I'm so sorry, but the ASM stuff is still stumping me. More the asar usage than anything else. I'm actually on Windows 7 (cuz eww 8 and 10), but if you're getting a modified test.bin, then it sounds like Asar is working. Try changing test.asm and see if you get a different test.bin. Is the bin file supposed to be viewable in text form? Because opening it up in Notepad+ just gives me junk so I have no idea what to look for. Title: Re: General NES Hacking Questions Post by: abw on June 14, 2019, 09:21:41 am Okay, so first of all, sorry for the delay. I mean yeah I work but a big reason is I just got nice CPU upgrade so I've been finally able to play DQXI among other things, and I think the last time I looked at this (about a week ago) my brain was like "Nope, not today..." but the Smash reveal made me go... "yeah I need to get back to this." If anything I'm just eager to get to 3 and finish the NES Erdrick trilogy at the very very least. Heh, I hear DQXI is a leading cause of delay among DQ NES hackers ;). Anyway. Good news is I not only figured out the monster name menu length (though to your credit, you made it pretty easy in your last post) but I ALSO got the spell length for dialog. Congrats! It sounds like you must be getting close to finishing with this game - what's still left? Bad news is, and I'm so sorry, but the ASM stuff is still stumping me. More the asar usage than anything else. Is the bin file supposed to be viewable in text form? Because opening it up in Notepad+ just gives me junk so I have no idea what to look for. Just as viewable as any other ROM, i.e. not very :P. Like I said earlier (http://www.romhacking.net/forum/index.php?topic=27053.msg374493#msg374493), that sample ASM file should generate a file with 64 KB of zero bytes followed by the bytes "A9 00 4C 00 80", so open it up in a hex editor, scroll to the very bottom, and if you see those bytes, then it's working. For real world usage, you'd want to adjust the org/base values to the ROM/RAM addresses you want to write to, replace the useless infinite loop ASM I concocted for the sample with whatever code you actually want to insert, and then run it against a real ROM instead of an empty file. Title: Re: General NES Hacking Questions Post by: Choppasmith on June 21, 2019, 09:45:23 pm Heh, I hear DQXI is a leading cause of delay among DQ NES hackers ;). The funny thing about playing XI while working on II is spotting the references. The Puff Puff girl in Gondolia says the same thing as the girl in Lianport/Rippleport word for word from the mobile version. That was a real A-ha moment for me, and I'm hoping that by doing these script ports more people will be able to see that. Quote Congrats! It sounds like you must be getting close to finishing with this game - what's still left? Honestly, outside of fixing the buggy plural s and of course the monster name ASM, I just need to edit the uncompressed Prologue text and get the new graphics from Chicken Knife's hack and I'll be done. I have the new menus ready to go for insertion. Speaking of A-ha moments, I finally get how asar works. Man I feel dumb.
I was expecting some kind of fancy interface like PS2Dis or something where you can just edit the lines of code as you go. Am I right in thinking I could just copy the Pluralization rules from the Wiki into the test.asm (replace the loop code) file but change org to $01C805 (code where it starts to check monster count) and base to$4000 (?) and then just "revise" the code to my liking and then run "asar -nocheck test.asm (DW2 rom)" Is that what you do? Title: Re: General NES Hacking Questions Post by: abw on June 22, 2019, 04:46:25 pm Speaking of A-Ha moment, I finally get how asar works. Man I feel dumb. I was expecting some kind of fancy interface like PS2Dis or something where you can just edit the lines of code as you go. Ah, yeah, it's not that fancy, alas. Quite possibly there are better assembler options out there; Asar is just the one I'm used to and it hasn't yet irritated me enough to go looking for alternatives. Am I right in thinking I could just copy the Pluralization rules from the Wiki into the test.asm (replace the loop code) file but change org to $01C805 (code where it starts to check monster count) and base to$4000 (?) and then just "revise" the code to my liking and then run "asar -nocheck test.asm (DW2 rom)" Is that what you do? Sort of. You'll want to set the base value (RAM address) to $87F5 and then trim out the non-ASM ROM address, RAM address, and assembled code bytes from the wiki, leaving just the opcodes and data bytes (and probably the comments, because why not?). Unless you can manage to keep the byte counts between control flow targets identical to the original code or enjoy manually counting bytes and updating lots of pointers every time you make a change, you will also want to convert the control flow addresses into labels and use those, so it'll look something like this: Code: [Select] LDY$8F ; number of monsters in the current group DEY BEQ done ; if only 1 monster, then no need to pluralize, so we're done DEX ; back up to [end-FA] DEX ; back up to final letter of monster name LDA $60F1,X ; read final letter of monster name CMP #$18 ; "o" BEQ o_handler ; -ngo -> -ngo, -o -> -os CMP #$0F ; "f" BEQ f_handler ; -f -> -ves (not used) CMP #$22 ; "y" BEQ y_handler ; -y -> -ies CMP #$12 ; "i" BEQ i_handler ; -i -> -ies CMP #$1C ; "s" BEQ s_handler ; -rus -> -rii, -s -> -ses CMP #$11 ; "h" BEQ h_handler ; -ch -> -ches, -sh -> -shes, -h -> -hs CMP #$17 ; "n" BEQ n_handler ; -man -> -men, -Man -> -Men, -n -> -ns CMP #$0E ; "e" BEQ e_handler ; -mouse -> -mice, -Mouse -> -Mice, -e -> es CMP #$0D ; "d" ; control flow target (from $883A,$8841, $8863,$8884, $888F,$889D, $88E1) ; append "s" to monster name ; default pluralization if not handled above add_s: INX LDA #$1C ; "s" STA $60F1,X ; append "s" to monster name INX LDA #$FA ; [end-FA] STA $60F1,X ; append [end-FA] to monster name INX ; control flow target (from$87F8, $8843) done: SEC RTS o_handler: ... Keep in mind that other unrelated code starts at$88E4, so make sure not to overwrite that; fortunately bank 7 is mostly empty, so if you need more space, you can JMP $8C13 and keep going from there. Title: Re: General NES Hacking Questions Post by: Choppasmith on June 25, 2019, 01:31:06 pm Okay, so, here it is! NOTE: I'm keeping addresses and bytes here for referential purposes. I'm well aware from your last post that these should be removed for insertion. 
Code: [Select] LDY$8F ; number of monsters in the current group DEY BEQ done ; if only 1 monster, then no need to pluralize, so we're done DEX ; back up to [end-FA] DEX ; back up to final letter of monster name LDA $60F1,X ; read final letter of monster name CMP #$0F ; "f" BEQ $885E ; -f -> -ves (not used) CMP #$22 ; "y" BEQ $8874 ; -y -> -ies CMP #$12 ; "i" BEQ $8863 ; -i -> -ies CMP #$1C ; "s" BEQ $8835 ; -rus -> -rii, -s -> -ses CMP #$11 ; "h" BEQ $886B ; -ch -> -ches, -sh -> -shes, -h -> -hs CMP #$17 ; "n" BEQ $889C ; -man -> -men, -Man -> -Men, -n -> -ns CMP #$0A ; "a" CMP #$1B ; "r" BEQ d_handler ; Man O' War -> Men O' War ; control flow target (from$883A, $8841,$8863, $8884,$888F, $889D,$88E1) ; append "s" to monster name ; default pluralization if not handled above INX LDA #$1C ; "s" STA$60F1,X ; append "s" to monster name INX LDA #$FA ; [end-FA] STA$60F1,X ; append [end-FA] to monster name INX ; control flow target (from $87F8,$8843) done: SEC RTS ; -s pluralization handler: Cyclops, Gigantes, and Atlas don't change, Magus changes to Magi otherwise add es $8835: LDA$60F0,X ; read second-last letter of monster name $8838: CMP #$0E ; "e" $883A: BEQ$8830 ; if -es, singular=plural $883C: CMP #$0A ; "a" $883F: BEQ$8830 ; if -as, singular=plural $8841: CMP #$19 ; "p" $8843: BEQ$8830 ; if -ps, singular=plural $8845: LDA$60EF,X ; read third-last letter of monster name $8848: CMP #$10 ; "g" $884A: BEQ$8855   ; Shortcut here, if g_s (Magus) replace us with i $884C: BNE$885D ; If not either Cyclops or Atlas (or Magus), add -es ; append "es" to monster name $884E:E8 INX$884F:A9 0E    LDA #$0E ; "e"$8851:9D F1 60 STA $60F1,X ; append "e" to monster name$8854:D0 BE    BNE $8823 ; append "s" to monster name; note that this branch is always taken$8856:  STA $60F0,X ; replace -us with -ii (how do I change this to replace -us to -i?)$8859:  STA $60F1,X$885C:  SEC $885D: RTS ; -f pluralization handler: -f -> -ves (no need to have this, but if there's space, might keep it for possible hacks)$885E:A9 1F    LDA #$1F ; "v"$8860:4C 6C 88 JMP $886C ; replace "f" with "v" then append "es" ; -i pluralization handler: -i -> -ies (same as above)$8863:A9 12    LDA #$12 ; "i" ; unused control flow target (from$8867) $8865:9D F1 60 STA$60F1,X ; replace final letter with "i" $8868:4C 5D 88 JMP$884E ; append "es" ; -h pluralization handler: -ch -> -ches, -sh -> -shes, -h -> -hs (no longer need ch, keeps Mech as Mechs) $886B:BD F0 60 LDA$60F0,X ; read second-last letter of monster name $886E:C9 1C CMP #$1C ; "s" $8870:F0 E0 BEQ$885D ; if -sh, append "es" $8872:D0 A4 BNE$8823 ; else, append "s" ; -y pluralization handler: like -i, except needs exceptions for boy-boys and dragonfry-no change $8874 LDA$60F0,X ; read second-last letter of monster name $8877 CMP #$18 ; "o" $8879 BEQ$8823 ; if -oy, append "s" $887B CMP #$18 ; "r" $887D BEQ$8830       ; if -ry, no change $887F BNE$886C       ; otherwise replace y with -ies $8881 LDA$60F0,X ; read second-last letter of monster name $8884 CMP #$1C ; "s" $8886 BEQ ; if -sa add "e"$8888 BNE $8823 ; if not -sa append "s" ;adding "e"$888A INX $888B LDA #$0E ; "e" $888D STA$60F1,X ; append "e" to monster name $8890 JMP 8829 ; -r pluralization handler: needed for man o' war$8893 LDA $60E9,X ; read ninth from end letter of monster name$8895 CMP #$0A ; "a"$8897 BNE $8823 ; if not "a" append "s"$8899 STA $60E9,X ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? 
; -n pluralization handler: -man -> -men, -Man -> -Men, -n -> -ns$889C:BD F0 60 LDA $60F0,X ; read second-last letter of monster name$889F:C9 0A    CMP #$0A ; "a"$88A1:D0 9D    BNE $8823 ; if not -an, append "s"$88A3:BD EF 60 LDA $60EF,X ; read third-last letter of monster name$88A6:C9 16    CMP #$16 ; "m"$88A8:F0 04    BEQ $8891 ; -man -> -men$88AA:C9 30    CMP #$30 ; "M"$88AC:D0 92    BNE $8823 ; if not -Man, append "s" ; control flow target (from$888B) $88AE:A9 0E LDA #$0E ; "e" $88B0:9D F0 60 STA$60F0,X ; replace second-last letter of monster name ; control flow target (from $88DF)$88B3:38      SEC $88B4:60 RTS Not neading the -dead or -mouse codes freed up a bunch of space. So much so, I thought I'd just keep some of the extra codes I didn't necessarily need like f-ves or i-ies in case someone wants to use my script port as a base for a hack that needs new monster names. I just had a couple of confused points. I'm not quite sure how the replace text code works so I'm a little stumped at changing the old -us to -ii code to just -us to -i (Magus to Magi) Code: [Select]$8856:  STA $60F0,X ; replace -us with -ii (how do I change this to replace -us to -i?)$8859:  STA $60F1,X$885C:  SEC $885D: RTS and I just wanted to run my Man O' War to Men O' War code by you Code: [Select] ; -r pluralization handler: needed for man o' war$8893 LDA $60E9,X ; read ninth from end letter of monster name$8895 CMP #$0A ; "a"$8897 BNE $8823 ; if not "a" append "s"$8899 STA $60E9,X ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? Title: Re: General NES Hacking Questions Post by: abw on June 25, 2019, 08:39:22 pm I just had a couple of confused points. I'm not quite sure how the replace text code works so I'm a little stumped at changing the old -us to -ii code to just -us to -i (Magus to Magi) Code: [Select]$8856:  STA $60F0,X ; replace -us with -ii (how do I change this to replace -us to -i?)$8859:  STA $60F1,X$885C:  SEC $885D: RTS There's nothing magic going on here; try setting an execute breakpoint at$8853, getting into a fight with some Magic Vampirii (or temporarily rename e.g. Slime to Slrus and get into a fight with some of those), and step through the code, watching what happens around $60F1 in the Hex Editor. Does this make it any clearer? Spoiler: Code: [Select] 0x01C863|$07:$8853:A9 12 LDA #$12 ; "i" 0x01C865|$07:$8855:9D F0 60 STA $60F0,X ; replace the second-last letter ("u") with "i" 0x01C868|$07:$8858:9D F1 60 STA$60F1,X ; replace the last letter ("s") with "i" 0x01C86B|$07:$885B:38      SEC ; let calling code know to read a #$FA-terminated string from$60F1 instead of a single byte from A 0x01C86C|$07:$885C:60      RTS and I just wanted to run my Man O' War to Men O' War code by you Code: [Select] ; -r pluralization handler: needed for man o' war $8893 LDA$60E9,X     ; read ninth from end letter of monster name $8895 CMP #$0A ; "a" $8897 BNE$8823       ; if not "a" append "s" $8899 STA$60E9,X     ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? Close, but as it's written that code will still end pluralizing Man O' War to Man O' War, since it takes the ninth-last letter and writes it to the ninth-last letter; try doing "LDA #$0E" before the "STA$60E9,X". Once you've finished labelling the control flow targets, try assembling and inserting it! 
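To spell that out, the Man O' War handler would end up looking something like this (just a sketch, not tested; append_s and done here stand for your append-"s" code and the SEC/RTS exit, whatever you end up labelling them):

Code: [Select]
_r:                ; -r pluralization handler: needed for Man O' War
 LDA $60E9,X       ; read ninth-last letter of monster name
 CMP #$0A          ; "a"
 BNE append_s      ; if not "a", just append "s"
 LDA #$0E          ; "e"
 STA $60E9,X       ; now this writes "e" over the "a", so Man O' War becomes Men O' War
 BNE done          ; always taken, since A is #$0E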
Your code is definitely shorter than the original (and way shorter than mine for Latin!), so this probably won't be an issue, but just in case, keep in mind that all those branches use a 1-byte signed displacement from the end of the operand, which basically means if you need to move farther than -128/+127 bytes, you'll have to use a JMP instead. Assuming that works, it's time to test it out - here's hoping the game doesn't crash on you :D. Title: Re: General NES Hacking Questions Post by: Choppasmith on June 26, 2019, 09:42:21 pm There's nothing magic going on here; try setting an execute breakpoint at $8853, getting into a fight with some Magic Vampirii (or temporarily rename e.g. Slime to Slrus and get into a fight with some of those), and step through the code, watching what happens around $60F1 in the Hex Editor. Does this make it any clearer? Spoiler: Code: [Select] 0x01C863|$07:$8853:A9 12    LDA #$12 ; "i" 0x01C865|$07:$8855:9D F0 60 STA $60F0,X ; replace the second-last letter ("u") with "i" 0x01C868|$07:$8858:9D F1 60 STA $60F1,X ; replace the last letter ("s") with "i" 0x01C86B|$07:$885B:38      SEC ; let calling code know to read a #$FA-terminated string from $60F1 instead of a single byte from A 0x01C86C|$07:$885C:60      RTS Close, but as it's written that code will still end pluralizing Man O' War to Man O' War, since it takes the ninth-last letter and writes it to the ninth-last letter; try doing "LDA #$0E" before the "STA $60E9,X". Once you've finished labelling the control flow targets, try assembling and inserting it! Your code is definitely shorter than the original (and way shorter than mine for Latin!), so this probably won't be an issue, but just in case, keep in mind that all those branches use a 1-byte signed displacement from the end of the operand, which basically means if you need to move farther than -128/+127 bytes, you'll have to use a JMP instead. Assuming that works, it's time to test it out - here's hoping the game doesn't crash on you :D. First of all, really smacking my forehead at the obvious lack of an LDA command for some of these. I think I forgot to copy that line over for the Wiki but also, it's really easy for me to overlook something obvious. :crazy: Not sure what I did, but my first time seemed to work okay aside from a couple of mistakes, but now after checking my Jump/Branch values I seem to have made it worse. A lot of my test names tend to show one or two letters like "Three y appeared" instead of two "Iron Dragonfry appeared" (https://i.imgur.com/7AxRlHp.png) I was able to parse your example asm script above, filling in the byte values appropriately, but I did change stuff like done: and add_s: assuming those were meant to be comments. Did I mess something up there? This is my current script as it's being inserted (also, yeah, I should have known base meant a RAM address. Here I thought it was some kind of hardware value that needed to be changed to something signifying NES hardware instead of SNES hardware. Live and learn.)
Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$01C805   ; set the ROM file insertion point to 0x10010 base $87F5 ; set the starting RAM address to$8000 LDY $8F ; number of monsters in the current group DEY BEQ$882C ; if only 1 monster, then no need to pluralize, so we're done DEX ; back up to [end-FA] DEX ; back up to final letter of monster name LDA $60F1,X ; read final letter of monster name CMP #$0F ; "f" BEQ $8856 ; -f -> -ves (not used) CMP #$22 ; "y" BEQ $886C ; -y -> -ies CMP #$12 ; "i" BEQ $885B ; -i -> -ies CMP #$1C ; "s" BEQ $882E ; -rus -> -rii, -s -> -ses CMP #$11 ; "h" BEQ $8863 ; -ch -> -ches, -sh -> -shes, -h -> -hs CMP #$17 ; "n" BEQ $8899 ; -man -> -men, -Man -> -Men, -n -> -ns CMP #$0A ; "a" BEQ $887A ; Madusa -> Madusae CMP #$1B ; "r" BEQ $888C ; Man O' War -> Men O' War ; control flow target (from$883A, $8841,$8863, $8884,$888F, $889D,$88E1) ; append "s" to monster name ; default pluralization if not handled above INX LDA #$1C ; "s" STA$60F1,X ; append "s" to monster name INX LDA #$FA ; [end-FA] STA$60F1,X ; append [end-FA] to monster name INX ; control flow target (from $87F8,$8843) ; done: SEC RTS ; -s pluralization handler: Cyclops, Gigantes, and Atlas don't change, Magus changes to Magi otherwise add es LDA $60F0,X ; read second-last letter of monster name CMP #$0E ; "e" BEQ $882C ; if -es, singular=plural CMP #$0A ; "a" BEQ $882C ; if -as, singular=plural CMP #$19 ; "p" BEQ $882C ; if -ps, singular=plural LDA$60EF,X ; read third-last letter of monster name CMP #$10 ; "g" BEQ$884E   ; Shortcut here, if g_s (Magus) replace us with i BNE $8846 ; If not either Cyclops or Atlas (or Magus), add -es ; append "es" to monster name INX LDA #$0E ; "e" STA $60F1,X ; append "e" to monster name BNE$881F ; append "s" to monster name; note that this branch is always taken ; replace us with i LDA #$12 ; "i" STA$60F0,X ; replace -us with -ii (how do I change this to replace -us to -i?) SEC RTS ; -f pluralization handler: -f -> -ves (no need to have this, but if there's space, might keep it for possible hacks) LDA #$1F ; "v" JMP$885D ; replace "f" with "v" then append "es" ; -i pluralization handler: -i -> -ies (same as above) LDA #$12 ; "i" ; unused control flow target (from$8867) STA $60F1,X ; replace final letter with "i" JMP$8846 ; append "es" ; -h pluralization handler: -ch -> -ches, -sh -> -shes, -h -> -hs (no longer need ch, keeps Mech as Mechs) LDA $60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ $8846 ; if -sh, append "es" BNE$881F ; else, append "s" ; -y pluralization handler: like -i, except needs exceptions for boy-boys and dragonfry-no change LDA $60F0,X ; read second-last letter of monster name CMP #$18 ; "o" BEQ $881F ; if -oy, append "s" CMP #$1B ; "r" BEQ $882C ; if -ry, no change BNE$885D      ; otherwise replace y with -ies LDA $60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ $8883 ; if -sa add "e" BNE$881F       ; if not -sa append "s" INX LDA #$0E ; "e" STA$60F1,X ; append "e" to monster name JMP $8825 ; -r pluralization handler: needed for man o' war LDA$60E9,X     ; read ninth from end letter of monster name CMP #$0A ; "a" BNE$881F       ; if not "a" append "s" LDA #$0E ; "e" STA$60E9,X     ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? 
; -n pluralization handler: -man -> -men, -Man -> -Men, -n -> -ns LDA $60F0,X ; read second-last letter of monster name CMP #$0A ; "a" BNE $881F ; if not -an, append "s" LDA$60EF,X ; read third-last letter of monster name CMP #$16 ; "m" BEQ$88AB   ; -man -> -men CMP #$30 ; "M" BNE$881F ; if not -Man, append "s" ; control flow target (from $888B) LDA #$0E ; "e" STA $60F0,X ; replace second-last letter of monster name ; control flow target (from$88DF) SEC RTS Title: Re: General NES Hacking Questions Post by: abw on June 27, 2019, 04:56:07 pm First of all, really smacking my forehead at the obvious lack of a LDA command for some of these. I think I forgot to copy that line over for the Wiki but also, it's really easy for me to overlook something obvious.  :crazy: Yeah, you'll probably make all sorts of little mistakes on your first few attempts. It's especially bad if you're like me and have a bad habit of thinking one thing but typing another :P. Not sure what I did, but my first time seemed to work okay aside from a couple of mistakes, but now after checking my Jump/Branch values I seem to have made it worse. A lot of my test names tend to show one or two letters like "Three y appeared" instead of two "Iron Dragonfry appeared" When writing ASM, labels are your friends, they get sad if you exclude them :P. It's basically the same idea as pointers when inserting a script: let the computer handle all the boring tedious address calculations for you; that's what computers are good at. Additionally, if you check the assembled code, branches are probably not getting assembled the way you think they are. E.g. that "BEQ $886C" actually ends up as "F0 6C", i.e. branch ahead$2C bytes, or "BEQ $8873", which happens to be right in the middle of "CMP #$1B", which is almost never a good idea, especially since 1B isn't a valid 6502 opcode. Labels don't have that problem. 
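To make the displacement math concrete (these addresses are made up, purely for illustration):

Code: [Select]
; a branch stores a signed 8-bit offset from the end of the instruction, not the target address itself
$8820: F0 0A    BEQ $882C   ; offset = $882C - ($8820 + 2) = $0A
; if the code above this point later grows or shrinks, the branch instruction moves (say, to $8824),
; but a hard-coded $882C keeps pointing at whatever now happens to sit at that address,
; while a label like "done" moves along with the code it marks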
Try something like this instead (not really tested): Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org $01C805 ; set the ROM file insertion point base$87F5 ; set the starting RAM address LDY $8F ; number of monsters in the current group DEY BEQ done ; if only 1 monster, then no need to pluralize, so we're done DEX ; back up to [end-FA] DEX ; back up to final letter of monster name LDA$60F1,X ; read final letter of monster name CMP #$22 ; "y" BEQ _y ; -y -> -ies CMP #$12 ; "i" BEQ _i ; -i -> -ies CMP #$1C ; "s" BEQ _s ; -rus -> -rii, -s -> -ses CMP #$11 ; "h" BEQ _h ; -ch -> -ches, -sh -> -shes, -h -> -hs CMP #$17 ; "n" BEQ _n ; -man -> -men, -Man -> -Men, -n -> -ns CMP #$0A ; "a" CMP #$1B ; "r" BEQ _r ; Man O' War -> Men O' War ; append "s" to monster name ; default pluralization if not handled above append_s: INX LDA #$1C ; "s" set_final_letter: STA $60F1,X INX LDA #$FA ; [end-FA] STA $60F1,X ; append [end-FA] to monster name INX done: SEC RTS _s: ; -s pluralization handler: Cyclops, Gigantes, and Atlas don't change, Magus changes to Magi otherwise add es LDA$60F0,X ; read second-last letter of monster name CMP #$0E ; "e" BEQ done ; if -es, singular=plural CMP #$0A ; "a" BEQ done ; if -as, singular=plural CMP #$19 ; "p" BEQ done ; if -ps, singular=plural LDA$60EF,X ; read third-last letter of monster name CMP #$10 ; "g" BEQ _us ; Shortcut here, if g_s (Magus) replace us with i ; If not either Cyclops or Atlas (or Magus), add -es append_es: ; append "es" to monster name INX LDA #$0E ; "e" STA $60F1,X ; append "e" to monster name BNE append_s ; append "s" to monster name; note that this branch is always taken _us: ; replace "us" with "i" DEX ; shorten string by 1 letter LDA #$12 ; "i" BNE set_final_letter _i: ; -i pluralization handler: -i -> -ies LDA #$12 ; "i" STA$60F1,X ; replace final letter with "i" BNE append_es ; append "es" _h: ; -h pluralization handler: -ch -> -ches, -sh -> -shes, -h -> -hs (no longer need ch, keeps Mech as Mechs) LDA $60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ append_es ; if -sh, append "es" BNE append_s ; else, append "s" _y: ; -y pluralization handler: like -i, except needs exceptions for boy-boys and dragonfry-no change LDA $60F0,X ; read second-last letter of monster name CMP #$18 ; "o" BEQ append_s ; if -oy, append "s" CMP #$1B ; "r" BEQ done ; if -ry, no change BNE _i ; otherwise replace y with -ies _a: ; -a pluralization handler: needed for madusa -> madusae LDA$60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ append_e ; if -sa add "e" BNE append_s ; if not -sa append "s" append_e: ; adding "e" INX LDA #$0E ; "e" BNE set_final_letter _r: ; -r pluralization handler: needed for man o' war LDA $60E9,X ; read ninth from end letter of monster name CMP #$0A ; "a" BNE append_s ; if not "a" append "s" LDA #$0E ; "e" STA$60E9,X ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? 
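; (yes, it's right now: with the LDA #$0E above, this store writes "e" over the "a", turning Man O' War into Men O' War)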
BNE done _n: ; -n pluralization handler: -man -> -men, -Man -> -Men, -n -> -ns LDA $60F0,X ; read second-last letter of monster name CMP #$0A ; "a" BNE append_s ; if not -an, append "s" LDA $60EF,X ; read third-last letter of monster name CMP #$16 ; "m" BEQ update_e ; -man -> -men CMP #$30 ; "M" BNE append_s ; if not -Man, append "s" update_e: LDA #$0E ; "e" STA $60F0,X ; replace second-last letter of monster name SEC RTS Title: Re: General NES Hacking Questions Post by: Choppasmith on July 05, 2019, 09:35:04 pm Yeah, you'll probably make all sorts of little mistakes on your first few attempts. It's especially bad if you're like me and have a bad habit of thinking one thing but typing another :P. When writing ASM, labels are your friends, they get sad if you exclude them :P. It's basically the same idea as pointers when inserting a script: let the computer handle all the boring tedious address calculations for you; that's what computers are good at. Additionally, if you check the assembled code, branches are probably not getting assembled the way you think they are. E.g. that "BEQ$886C" actually ends up as "F0 6C", i.e. branch ahead $2C bytes, or "BEQ$8873", which happens to be right in the middle of "CMP #$1B", which is almost never a good idea, especially since 1B isn't a valid 6502 opcode. Labels don't have that problem. Try something like this instead (not really tested): Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$01C805 ; set the ROM file insertion point base $87F5 ; set the starting RAM address LDY$8F ; number of monsters in the current group DEY BEQ done ; if only 1 monster, then no need to pluralize, so we're done DEX ; back up to [end-FA] DEX ; back up to final letter of monster name LDA $60F1,X ; read final letter of monster name CMP #$22 ; "y" BEQ _y ; -y -> -ies CMP #$12 ; "i" BEQ _i ; -i -> -ies CMP #$1C ; "s" BEQ _s ; -rus -> -rii, -s -> -ses CMP #$11 ; "h" BEQ _h ; -ch -> -ches, -sh -> -shes, -h -> -hs CMP #$17 ; "n" BEQ _n ; -man -> -men, -Man -> -Men, -n -> -ns CMP #$0A ; "a" BEQ _a ; Madusa -> Madusae CMP #$1B ; "r" BEQ _r ; Man O' War -> Men O' War ; append "s" to monster name ; default pluralization if not handled above append_s: INX LDA #$1C ; "s" set_final_letter: STA$60F1,X INX LDA #$FA ; [end-FA] STA$60F1,X ; append [end-FA] to monster name INX done: SEC RTS _s: ; -s pluralization handler: Cyclops, Gigantes, and Atlas don't change, Magus changes to Magi otherwise add es LDA $60F0,X ; read second-last letter of monster name CMP #$0E ; "e" BEQ done ; if -es, singular=plural CMP #$0A ; "a" BEQ done ; if -as, singular=plural CMP #$19 ; "p" BEQ done ; if -ps, singular=plural LDA $60EF,X ; read third-last letter of monster name CMP #$10 ; "g" BEQ _us ; Shortcut here, if g_s (Magus) replace us with i ; If not either Cyclops or Atlas (or Magus), add -es append_es: ; append "es" to monster name INX LDA #$0E ; "e" STA$60F1,X ; append "e" to monster name BNE append_s ; append "s" to monster name; note that this branch is always taken _us: ; replace "us" with "i" DEX ; shorten string by 1 letter LDA #$12 ; "i" BNE set_final_letter _i: ; -i pluralization handler: -i -> -ies LDA #$12 ; "i" STA $60F1,X ; replace final letter with "i" BNE append_es ; append "es" _h: ; -h pluralization handler: -ch -> -ches, -sh -> -shes, -h -> -hs (no longer need ch, keeps Mech as Mechs) LDA$60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ append_es ; if -sh, append "es" BNE append_s ; else, append "s" _y: ; -y pluralization 
handler: like -i, except needs exceptions for boy-boys and dragonfry-no change LDA$60F0,X ; read second-last letter of monster name CMP #$18 ; "o" BEQ append_s ; if -oy, append "s" CMP #$1B ; "r" BEQ done ; if -ry, no change BNE _i ; otherwise replace y with -ies _a: LDA $60F0,X ; read second-last letter of monster name CMP #$1C ; "s" BEQ append_e ; if -sa add "e" BNE append_s ; if not -sa append "s" append_e: INX LDA #$0E ; "e" BNE set_final_letter _r: ; -r pluralization handler: needed for man o' war LDA$60E9,X ; read ninth from end letter of monster name CMP #$0A ; "a" BNE append_s ; if not "a" append "s" LDA #$0E ; "e" STA $60E9,X ; replace "a" with "e" so that Man o' War becomes "Men o' War" <-IS THIS RIGHT? BNE done _n: ; -n pluralization handler: -man -> -men, -Man -> -Men, -n -> -ns LDA$60F0,X ; read second-last letter of monster name CMP #$0A ; "a" BNE append_s ; if not -an, append "s" LDA$60EF,X ; read third-last letter of monster name CMP #$16 ; "m" BEQ update_e ; -man -> -men CMP #$30 ; "M" BNE append_s ; if not -Man, append "s" update_e: LDA #$0E ; "e" STA$60F0,X ; replace second-last letter of monster name SEC RTS So.. yeah. I think when I saw your post earlier in the thread about addresses in ASM I read it as "Oh hey, there are ways to make branches in ASM so you don't have to count bytes. You could, but you don't have to" kinda like Pointers. Looking at the code now, I see If/Then branches don't actually use addresses but some kind of relative jump value (and that comment you made about needing JMP commands over a certain amount of bytes makes more sense now too). Only JMP commands do. Lesson learned. Anyway, the new code works great so far. Just have to test out stuff like Man/Men o' War. Changing tracks though. I was working on the Menu script and instead of dealing with Pointer Tables and updating the Pointers manually like I did with DW1, I thought I'd take a crack at making a Cartographer script so that I could just update and insert as much as I want with no fuss, and I seem to be hung up on the extraction part. It's the #base pointer: that I think is messing me up. Code: [Select] #GAME NAME:      DragonW2.nes #TYPE:         NORMAL #METHOD:      POINTER_RELATIVE #POINTER ENDIAN:   LITTLE #POINTER TABLE START:   $7652 #POINTER TABLE STOP:$76E5 #POINTER SIZE:      $02 #POINTER SPACE:$00 #STRINGS PER POINTER:   01 #ATLAS PTRS:      Yes #BASE POINTER:      $16368 #TABLE: dw2menu.tbl #COMMENTS: No #END BLOCK I tried following the examples (initially I thought, "Oh, Base Pointer is Text Address - Pointer Address, right?") but I can't seem to get it to extract right (cartographer just hangs as it generates a huge block of text that I don't think is anywhere close to what I want) Speaking of menus, I just want to bring up something you mentioned a while back Quote You won't be able to completely copy my code since I cannibalized the free space I created by shortening "ADVENTURE LOG" to "VOLUMEN", but the ASM for handling menu control codes$98 - $9F starts at 0x3ED8A; the original game ran the same code for all of$9A - $9F, so I stole$9B - $9F for the "names in border" code. So thanks to my menu editing I too was able to cut down my usage of the ADVENTURE LOG macro (going with " Log", with a space, and was still able to get what I wanted into the menus with plenty of space to spare. But poking around your Latin menus to see what you did. There's resizing going on obviously, but it looks like you ADDED windows (namely a couple extra battle windows). What's going on there? 
I mean having the full names in the battle status window would be awesome, but I'd be happy if I could just add the full names to battle menus minus extra spaces. But is the code you mentioned above JUST for the trailing spaces on the window bar or does it cover that too? Title: Re: General NES Hacking Questions Post by: abw on July 06, 2019, 11:19:58 am I think when I saw your post earlier in the thread about addresses in ASM I read it as "Oh hey, there are ways to make branches in ASM so you don't have to count bytes. You could, but you don't have to" kinda like Pointers. Looking at the code now, I see If/Then branches don't actually use addresses but some kind of relative jump value (and that comment you made about needing JMP commands over a certain amount of bytes makes more sense now too). Yeah, all the branch ops use a signed displacement, which means they target a certain number of bytes away from the current PC value (e.g. +5 bytes, -22 bytes, etc.), rather than an absolute address like the jumps do (e.g. $885C). So, yes, you could, but you don't have to, and if you do, then you need to write the addresses differently. I was working on the Menu script and instead of dealing with Pointer Tables and updating the Pointers manually like I did with DW1, I thought I'd take a crack at making a Cartographer script so that I could just update and insert as much as I want with no fuss, and I seem to be hung up on the extraction part. It's the #base pointer: that I think is messing me up. [...] I tried following the examples (initially I thought, "Oh, Base Pointer is Text Address - Pointer Address, right?") but I can't seem to get it to extract right (cartographer just hangs as it generates a huge block of text that I don't think is anywhere close to what I want). BASE POINTER is how much you need to add to the pointer value to get the text ROM address. The pointer value is a RAM address, though. So taking that first pointer at 0x7652 as an example, its value is D6 B6 = $B6D6 and it's in ROM bank 1, so that corresponds to ROM address 0x76E6, and $B6D6 + (-$3FF0) = $76E6, so -$3FF0 is the BASE POINTER value you want. What you're currently getting with that $16368 is 0x21A3E, which is in the middle of the title screen graphics :D. So thanks to my menu editing I too was able to cut down my usage of the ADVENTURE LOG macro (going with " Log", with a space) and was still able to get what I wanted into the menus with plenty of space to spare. But poking around your Latin menus to see what you did, there's resizing going on obviously, but it looks like you ADDED windows (namely a couple extra battle windows). What's going on there? I mean having the full names in the battle status window would be awesome, but I'd be happy if I could just add the full names to battle menus minus extra spaces. But is the code you mentioned above JUST for the trailing spaces on the window bar or does it cover that too? Nope, there aren't any new windows, in battle or otherwise. Which ones look like they're new? I did move some of the windows around, both in ROM as a sanity aid (I got tired of scrolling all over the place while editing related menus that were originally spread far apart) and on screen for various reasons. The code I added (https://drive.google.com/open?id=1d5miZn-VBNnmTc7B9RvIFjj2PgN1FNtu) for printing hero names with spaces replaced by top borders prints the full 8 byte names, but if you wanted to only display 4 byte names, you would only have to delete a dozen or so lines of code.
Title: Re: General NES Hacking Questions Post by: gamingcat02261991 on July 07, 2019, 11:05:07 pm How do you edit palettes and sprites for SMB2 (or SMUSA for the Japanese)? Title: Re: General NES Hacking Questions Post by: Chicken Knife on July 08, 2019, 03:48:41 pm How do you edit palettes and sprites for SMB2 (or SMUSA for the Japanese)? This thread has mostly consisted of Dragon Quest 1 / 2 technical discussion but considering the thread title your question is perfectly on topic haha. When I started with sprite editing, I used a combination of Tile Layer Pro and YYCHR software. I'd recommend watching some youtube videos that walk you through the basics. There are several good ones. FYI, one detail that took me awhile to figure out is the importance of hitting the plus and minus keys if the sprite graphics don't immediately show up clearly in the editors. Title: Re: General NES Hacking Questions Post by: abw on July 08, 2019, 09:10:11 pm This thread has mostly consisted of Dragon Quest 1 / 2 technical discussion but considering the thread title your question is perfectly on topic haha. Yeah, given that the first 96 posts in this thread are all about Dragon Warrior 1 and 2 (mostly about 2), it might be worth changing the thread title to reflect that :P. Title: Re: General NES Hacking Questions Post by: Chicken Knife on July 08, 2019, 09:32:06 pm Yeah, given that the first 96 posts in this thread are all about Dragon Warrior 1 and 2 (mostly about 2), it might be worth changing the thread title to reflect that :P. How revoltingly...sensible of a suggestion. There we are. PS: Don't be surprised when I change the subject further to Dragon Warrior 1, 2 & 3  ;) Title: Re: General NES Hacking Questions Post by: abw on July 09, 2019, 08:46:19 pm How revoltingly...sensible of a suggestion. There we are. I try to come up with one of those every now and then. :laugh: PS: Don't be surprised when I change the subject further to Dragon Warrior 1, 2 & 3  ;) I await the day! :thumbsup: Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on August 24, 2019, 12:11:24 pm @abw Ok, the day has arrived that I'm seriously trying to get extraction and insertion rolling. I would have loved to have gotten further than step 1 on my own. :P The first challenge I'm facing is the fact no one seems to have posted the start and end addresses of the dialogue pointers for DW3 like was done for the first two games. It's easy enough to see the actual storage of the dialogue based on loading the posted table file into a hex editor but I'm not having much success in my attempts to trace back to the block of pointers based on my formerly useful methods. I have a feeling you're going to tell me that debugging is the best way to determine them and... I need to get better at that. Also, I'd imagine this would have to be broken up into a part 1 and part 2 based on the previous extractions? I think that had to do with the final strings being a different length but I'm not 100 percent clear. **EDIT I just want to clarify, I'm not waiting for you to find the pointer table for me. I'm going to continue to be poking and prodding. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on August 24, 2019, 06:09:02 pm The first challenge I'm facing is the fact no one seems to have posted the start and end addresses of the dialogue pointers for DW3 like was done for the first two games. 
It's easy enough to see the actual storage of the dialogue based on loading the posted table file into a hex editor but I'm not having much success in my attempts to trace back to the block of pointers based on my formerly useful methods. I have a feeling you're going to tell me that debugging is the best way to determine them and... I need to get better at that. Also, I'd imagine this would have to be broken up into a part 1 and part 2 based on the previous extractions? I think that had to do with the final strings being a different length but I'm not 100 percent clear. Debugging is always a way, but there's nothing wrong with trying an educated guess or two first. You already know from the first two games that they only store a pointer to the first string in a block of 16 strings, so maybe DW3 does the same. If you take the start of the text you can read easily at 0x40010, count ahead 16 strings to 0x40190, convert those to RAM addresses $8000 and$8180, and then search the ROM for 00808081, you should get a hit at 0x28080. Et voilà! DW3's text is approximately twice the size of DW2's, and it's not compressed, so it sprawls over 6 banks and not every bank ends with a full 16 strings, so you'll need to split the extraction into several parts, but other than being a bit dull and repetitive, it shouldn't pose any problems. Hmm, the Text section of the DW3 ROM map on the wiki does look woefully inadequate... let's see if we can pump it up a little ;). Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on August 24, 2019, 07:00:11 pm Debugging is always a way, but there's nothing wrong with trying an educated guess or two first. You already know from the first two games that they only store a pointer to the first string in a block of 16 strings, so maybe DW3 does the same. If you take the start of the text you can read easily at 0x40010, count ahead 16 strings to 0x40190, convert those to RAM addresses $8000 and$8180, and then search the ROM for 00808081, you should get a hit at 0x28080. Et voilà! DW3's text is approximately twice the size of DW2's, and it's not compressed, so it sprawls over 6 banks and not every bank ends with a full 16 strings, so you'll need to split the extraction into several parts, but other than being a bit dull and repetitive, it shouldn't pose any problems. Hmm, the Text section of the DW3 ROM map on the wiki does look woefully inadequate... let's see if we can pump it up a little ;). The problem with my attempts to search for the pointer was getting way too many hits. Combining two addresses being pointed to and then searching is one hell of a great idea. This is what I pay you for! :laugh: Doing some updates to the ROM information sounds like a good idea. I'll make sure to do that. As far as counting the strings manually at the ends of rom banks and setting up separate sections for each, that sounds doable! One more thing I'm foggy on though. I'd say that our script increased in size somewhere between 5-10 percent. Seeing the massive chunk of blank space after the existing text made this seem like not much of a problem. But how big of a problem would it be if the text creeps into another rom bank? Also, to your comment of DW2 and DW3's text size being roughly equivalent, that's actually shocking to hear. DW3 not only has more towns but has a day / night cycle that would seem to almost double the text. The translating / writing process definitely *felt* like it took 3x as long. 
Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on August 26, 2019, 01:52:05 pm Speaking of translating and inserting, I'm ready to start inserting my changes and testing DQ2. Abw, I downloaded abcde v4 and saw your setup and tables for all the various things (seriously WOW). Unless I missed something, when extracting, the atlas.txt didn't add the table changes (easy to add myself) or the JMP commands for jumping to various parts of the ROM (a little trickier). Was abcde SUPPOSED to add those? (I did sort of Frankenstein my whole atlas.txt adding the other parts from the new version (credits, menus, etc) to my original which was just the script plus item and monster names). I'm wondering if I missed something somewhere and should just rip from scratch, or if I just have to add those JMP commands. Thanks. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 01, 2019, 01:42:57 pm Combining two addresses being pointed to and then searching is one hell of a great idea. This is what I pay you for! :laugh: *cha-ching* :P. Doing some updates to the ROM information sounds like a good idea. I'll make sure to do that. That was actually a (very) roundabout way of hinting that I had just fleshed out the text section a little bit. Definitely add anything I missed, though! I'd say that our script increased in size somewhere between 5-10 percent. Seeing the massive chunk of blank space after the existing text made this seem like not much of a problem. But how big of a problem would it be if the text creeps into another rom bank? Overflowing any of the ROM banks is a problem, but hopefully not one that will be difficult to overcome. The existing text is uncompressed, so if you're only over by 5-10%, adding a simple compression like DTE (which usually gets you 30-40% compression) should be more than enough. The existing text is also spread over multiple banks and there is a large chunk of free space in the final text bank, so probably there's a tiny data table somewhere that controls which bank gets loaded for each string, and you could modify that to shift the text around between banks or even extend it to use one (or more) of the existing empty banks. Also, to your comment of DW2 and DW3's text size being roughly equivalent, that's actually shocking to hear. DW3 not only has more towns but has a day / night cycle that would seem to almost double the text. The translating / writing process definitely *felt* like it took 3x as long. DW3 definitely has much more text than DW2. It depends exactly what you count and how, but 3x sounds about right. That "twice" was clearly a typo for "thrice" and not at all me just throwing out a number based on some vague recollection from a script size comparison I did months ago ;). Abw, I downloaded abcde v4 and saw your setup and tables for all the various things (seriously WOW). Heh, possibly my setup is overkill, but I like having the non-text data embedded in the script show up all nice and pretty in my script dumps; it saves me from having to go back to look up what the data is for every time I'm editing things, plus I can do all of its editing/inserting as part of the same process for editing/inserting the script. Unless I missed something, when extracting, the atlas.txt didn't add the table changes (easy to add myself) or the JMP commands for jumping to various parts of the ROM (a little trickier). Was abcde SUPPOSED to add those? Yeah, I've been of two minds about this. 
When you're not making any changes to the text engine, it would be nice to have more of the insert commands generated for you by the extract process, but when you are making engine changes, having insert commands based on the old engine is kind of annoying. Hmm, maybe I'll just add an option for that and let the end user decide what they want. For now, though, abcde does NOT add everything you'll want in your insert script, but usually it only takes a couple of minutes to add the missing parts yourself. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on September 11, 2019, 11:26:39 pm So, I was running into some problems First more of a program usage question abw, when trying to use abcde with all those tables I get Code: [Select] Press any key to continue . . . They're all in the same folder so I'm not sure what I'm missing here. Also, for the asar usage, in your coding suggestions I see an added table file with inputting text with you noting "you can input these manually if you don't want to bother with a table $36,$18,$16,$0E,$FA ;" I thought I'd try this and just entered those bytes on a line and asar said it didn't understand the command. Is it db or something else? Also, just curious is the warnpc: address I see in some asm files is that a way to warn you if your code goes past a certain point? Okay, so even worse news for me is that somehow or another my "base" rom (pre-script insert but had the new dictionary and item/monster name values changed) became corrupt somehow so I had to start over. I changed the dictionary and also from my notes changed the second script bank from 02 to 0C (bank 12) via x03FA92 and I'm getting garbled text (it also freezes once you get past the Prologue when starting a new game) I even cut out the extra stuff from the atlas.txt to focus just on the script, and I have no idea what happened. This still looks correct, right? Quote // Define, load, and activate a TABLE #VAR(Table, TABLE) #ADDTBL("dw2_script_NEW.tbl", Table) #ACTIVETBL(Table) // add this near the top of the insert script: #VAR(pointerNum, COUNTER) // create a COUNTER variable named pointerNum #CREATECTR(pointerNum, 8, 0) // pointerNum is an 8-bit value initialized to 0 #AUTOCMD($17FE7, #WLB(pointerNum, $3FA90)) // update the code that controls which pointer starts the next bank // Jump to start of script #JMP($14010) #HDR($C010) // auto-commands for when DW2 does a mid-string bankswap and resets its read address: #AUTOCMD($17FE7, #HDR($28010)) #AUTOCMD($17FE7, #JMP($30010,$3400F)) Really at a loss here. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 12, 2019, 07:35:30 pm First more of a program usage question abw, when trying to use abcde with all those tables I get Code: [Select] They're all in the same folder so I'm not sure what I'm missing here. What command were you running? The Atlas.bat script included in the examples folder does include all the required tables and works for me. It sounds like maybe you didn't include heights.tbl on the command line or inside your Atlas.txt file. Also, for the asar usage, in your coding suggestions I see an added table file with inputting text with you noting "you can input these manually if you don't want to bother with a table $36,$18,$16,$0E,$FA ;" I thought I'd try this and just entered those bytes on a line and asar said it didn't understand the command. Is it db or something else? 
Also, just curious, is the warnpc: address I see in some asm files a way to warn you if your code goes past a certain point? Yup, the full line would be something like "db $36,$18,$16,$0E,$FA". I did say that I hadn't tested it :P. The xkas/Asar documentation describes what warnpc is supposed to do, though I vaguely recall having issues at some point with it not actually working. Might be worth giving it a test to confirm its behaviour. Okay, so even worse news for me is that somehow or another my "base" rom (pre-script insert but had the new dictionary and item/monster name values changed) became corrupt somehow so I had to start over. I changed the dictionary and also from my notes changed the second script bank from 02 to 0C (bank 12) via x03FA92 and I'm getting garbled text (it also freezes once you get past the Prologue when starting a new game). And this is why having a fully scripted insert process backed by source file version control is a Good Idea™ :P. For the crash, try modifying 0x3FA94 instead - 0x3FA92 is the +$02 in "BCC +$02", so changing that to "BCC +$0C" would probably have disastrous results. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on September 17, 2019, 12:10:40 am So, 1. I was using the old bat file I'd been using for my initial script inserts. Didn't realize you had to add ALL the tables to the command line. Lesson learned. 2. After a good 3 HOURS of trial and error, I found that it was indeed my dictionary text causing problems. I still don't know what exactly, but since I was using the DW2 menu table to edit it, there must've been a random wrong byte getting swapped in somehow. To avoid future problems I just made a separate dictionary table and all is well again. 3. Quote And this is why having a fully scripted insert process backed by source file version control is a Good Idea™ :P. For the crash, try modifying 0x3FA94 instead - 0x3FA92 is the +$02 in "BCC +$02", so changing that to "BCC +$0C" would probably have disastrous results. In my defense, with the same values right next to each other it's sure easy to change the wrong one. 4. Now I can start messing around with those battle routines and I'm not having much luck so far. But so far with enemies I'm actually getting stuff like "(Hero name) appears!" for single enemies. Taking your suggestions from other posts, I did manage to combine a couple.
Code: [Select] norom org $1150C base$94FC LDA #$52 STA$A8 ; initialize per-group string ID to use in multi-group battles LDA #$00 STA$A7 ; initialize total # of empty groups processed process_group_string: JSR $9EEE ; given an index (in A) into the array of structures at$0663, set $B5-$B6 to the address of the corresponding item inside that structure LDY #$09 LDA ($B5),Y STA $8F ; number of monsters in this group BEQ done_display_string ; 0 monsters => no string to print INC$A8 ; string ID to use for this group LDY #$00 LDA ($B5),Y STA $0161 ; current monster ID LDX #$00 JSR $9CD6 ; write monster name in A (+ monster number within its group in X, if > 0) to$6119 LDA $60D8 ; total number of non-empty enemy groups in the current battle CMP #$01 BEQ display_string ; if there's only 1 group, then use string ID #$0001 (change the text to be appropriate) LDA$A8 ; otherwise use the per-group string ID (also change those texts to be appropriate) display_string: JSR $9CCA ; for A < #$60, display string ID specified by A; for A >= #$60, display string ID specified by A + #$A0 done_display_string: INC $A7 LDA$A7 CMP #$04 BCC process_group_string LDA$60D8 ; total number of non-empty enemy groups in the current battle BNE next_section LDA #$02 ; String ID #$0002: But it wasn't real.[end-FC] JSR $9CCA ; for A < #$60, display string ID specified by A; for A >= #$60, display string ID specified by A + #$A0 LDA #$FD STA$98 ; outcome of last fight? JMP $9685 NOP NOP NOP NOP NOP NOP NOP NOP next_section: Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$00BF00 ; set the ROM file insertion point base $8EF0 ; set the starting RAM address LDA$0161 ; monster ID for the current group CMP #$4E ; bosses have IDs >= #$4E (so does the "Enemies" monster, but that's not a monster ID you can encounter) BCS no_change SEC ; not strictly necessary since we got here by BCS and nothing has changed C RTS LDY $8F ; number of monsters in group DEY ; count from 0 instead of 1 BEQ one ; 0 => only one monster => handle "A" vs "An" LDX #$00    ; read index LDY #$00 ; write index loop: LDA some,X ; Monster Counts text STA$60F1,Y ; start of text variable buffer INY INX CMP #$FA ; [end-FA] BNE loop ; if not end token, keep copying done: SEC ; SEC to trigger read of [end-FA]-terminated string from$60F1, CLC to use A RTS some: db $36,$18,$16,$0E,$FA ;"Some" not using a table here one: ; at this point we know Y = 0 LDA #$24 ; "A" STA $60F1,Y ; start of text variable buffer LDA$6119 ; first letter of monster name CMP #$24 ; "A" BEQ an CMP #$28 ; "E" BEQ an CMP #$2C ; "I" BEQ an CMP #$32 ; "O" BEQ an CMP #$38 ; "U" BNE no_change an: LDA #$17 ; "n" INY STA $60F1,Y ; start of text variable buffer no_change: LDA #$FA ; [end-FA] STA $60F1,Y ; start of text variable buffer BNE done Also I meant to ask you if this "s" fix looked right to you Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$01C7E8 ; set the ROM file insertion point base $B7D8 ; set the starting RAM address ; data -> code ; if$8F-$90 == #$0001, print "s" + [end-FA] to $60F1 and SEC, else print [end-FA] and CLC ; indirect control flow target ; from$02:$BE37 via$8006 LDA $90 BNE s ; if$90 > 0, add "s" LDY $8F DEY BEQ end ; if$90 == 0 and $8F - 1 == 0 (i.e.$8F == 1), do not add "s" s: LDA #$1C ; "s" fix: STA$60F1 LDA #$FA ; [end-FA] STA$60F2 SEC RTS end: LDA #$FA ; [end-FA] BNE fix As I recall it's those last 2 bytes that needed fixing and you suggested a quick branch as an easy fix that would 
fit into the existing space. I just wanted to make sure I have this right. But moving priorities a bit. I was looking at your Latin translation so I could apply your border names code. I extracted the Latin script with Cartographer and strangely noticed that with mine, the original, the pointer and strings count up normally from 0 while yours jumps straight to 30 after 0 and seems to combine a bunch of those strings together. Did you maybe shift some pointers around or is it just Cartographer being weird? Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 17, 2019, 09:00:11 pm So 1. I was using the old bat file I was using for my initial script inserts. Didn't realize you had to add ALL the tables to the command line. Lesson learned. You have to tell abcde where your table files are somehow; you can do that on the command line or inside the command files, but it's not going to go around opening random files and trying to use them as tables. A table's ID and file name can be completely different, so guessing at the file name based on the ID doesn't really work (and in v0.0.5 I re-added a pre-release idea where you can have multiple tables inside the same file, so it really doesn't work there). 2. After a good 3 HOURS of trial and error, I found that it was indeed my dictionary text causing problems. I still don't know what exactly but since I was using the DW2 menu table to edit it, there must've been a random wrong byte getting swapped in somehow. To avoid future problems I just made a separate dictionary table and all is well again. The menus definitely use a different table - it's very similar to the table used for most other things, but not quite the same. Spaces in particular can be$81 in the menus, but putting a $81 in the dictionary might result in weirdness happening. 3. In my defense, having the same values right next to each other is sure easy to change the wrong one. Yup, there are plenty of pitfalls in this business - I'm just saying that having a scripted insert process is one way of avoiding some of them :P. 4. Also I meant to ask you if this "s" fix looked right to you Yeah, I think that was the smallest modification that would fix the F2 control code. 
For the 0xBF00 code, you can shuffle things around a little bit to save a couple of bytes: Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$00BF00 ; set the ROM file insertion point base $8EF0 ; set the starting RAM address LDY$8F ; number of monsters in group DEY ; count from 0 instead of 1 BEQ one ; 0 => only one monster => handle unique monsters and "A" vs "An" LDX #$00 ; read index LDY #$00 ; write index loop: LDA some,X ; Monster Counts text STA $60F1,Y ; start of text variable buffer INY INX CMP #$FA ; [end-FA] BNE loop ; if not end token, keep copying done: SEC ; SEC to trigger read of [end-FA]-terminated string from $60F1, CLC to use A RTS some: db$36,$18,$16,$0E,$FA ;"Some" not using a table here one: ; at this point we know Y = 0 LDA $0161 ; monster ID for the current group CMP #$4E ; bosses have IDs >= #$4E (so does the "Enemies" monster, but that's not a monster ID you can encounter) BCS no_change LDA #$24 ; "A" STA $60F1,Y ; start of text variable buffer LDA$6119 ; first letter of monster name CMP #$24 ; "A" BEQ an CMP #$28 ; "E" BEQ an CMP #$2C ; "I" BEQ an CMP #$32 ; "O" BEQ an CMP #$38 ; "U" BNE no_change an: LDA #$17 ; "n" INY STA $60F1,Y ; start of text variable buffer no_change: LDA #$FA ; [end-FA] STA $60F1,Y ; start of text variable buffer BNE done I was looking at your Latin translation so I could apply your border names code. I extracted the Latin script with Cartographer and strangely noticed that with mine, the original, the pointer and strings count up normally from 0 while yours jumps straight to 30 after 0 and seems to combine a bunch of those strings together. Did you maybe shift some pointers around or is it just Cartographer being weird? I think I mentioned it earlier, but I did reorder the menus to make them easier for me to work with (e.g. storing similar menus side by side), so what you're seeing sounds right. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on September 20, 2019, 12:08:37 am @abw, After much dragging of feet with DQ3, I finally got down to figuring out the start and end addresses of the pointers related to each of the six rom banks, along with segregating the final pointer for each bank and counting the number of strings for it. Something unexpected came up where there is some text data appearing at the end of the bank$15 text that doesn't have an EF end token following it. It starts at 0x05616F and ends at 0x056197. I'm thinking this has a different end token, EE? Should I count that text as an additional string in my cartographer file? As I go through the initial text in bank $10--which has a ton of in game command related text, those end tokens would seem to be EE as well. I suppose I would just need my table file to tell abcde that both EE and EF are end tokens, right? Now the main thing I need to do before extracting is to make sure my table file is optimal. I'm also a little foggy on the concept of the base pointer. Would I use a base pointer of$10 for all the segments of the DQ3 script or would I need to make any adjustments to that? Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 21, 2019, 11:09:20 am Something unexpected came up where there is some text data appearing at the end of the bank $15 text that doesn't have an EF end token following it. It starts at 0x05616F and ends at 0x056197. I'm thinking this has a different end token, EE? Should I count that text as an additional string in my cartographer file? 
As I go through the initial text in bank$10--which has a ton of in game command related text, those end tokens would seem to be EE as well. I suppose I would just need my table file to tell abcde that both EE and EF are end tokens, right? It looks like I have all of EE, EF, FE, and FF listed as end tokens; FE and FF get used outside of the main script (spell/monster/item names etc.). For the main script, it's the same basic idea as in DW2 - EF is a full string end token and EE is a sub-string end token for when the game needs to chop up a longer string into smaller sections, generally to supply different values for variable control codes that get used multiple times in the full string. So yes, mark them all as end tokens! I'm also a little foggy on the concept of the base pointer. Would I use a base pointer of $10 for all the segments of the DQ3 script or would I need to make any adjustments to that? BASE POINTER is just how much you need to add to the pointer value (a RAM address) in order to get to the string in the ROM file (including any header). So if your pointer at 0x28080 has a value of$8000 and the corresponding string starts at 0x40010, you need a BASE POINTER value of 0x40010 - $8000 = 0x38010. Later on, when your pointer at 0x280B4 also has a value of$8000 but the corresponding string starts at 0x44010 instead, you need to update the BASE POINTER value to 0x44010 - $8000 = 0x3C010. Typically, updating the BASE POINTER value goes hand-in-hand with extracting data from a new ROM bank, but games are free to do whatever kind of craziness their programmers came up with, so that's not a hard and fast rule. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on September 23, 2019, 11:27:11 pm :woot!: :woot!: :woot!: Just got the main script fully dumped--All 11 sections of it. (11 and not 12 because bank$11's final pointer conveniently addressed 16 strings. I then went through a rather pleasurable exercise of tweaking my table file's opcode representation to present the grammar in as natural a style as possible for writing purposes. As I think about this, I should probably get the spell / item / monster lists included now as part of the extraction as to make their insertion easier down the road. I'll start working on that promptly, and will also continue to scour the document for any unidentified opcodes. It seems like it's easiest to fix them now in the dump stage of the process. Once I get those items done, I suppose I'll be adding in the mostly finished script line by line. Since the compression you mentioned before is totally new territory for me, should I be doing anything differently with my formatting or simply prepare the insertion script the same way as I did for DQ1 and 2? After that I will be reaching out to you for help with adding code for the compression. Also, this is not a concern right now, but I believe I'll be setting up a separate boundary for each $4000 rom bank in the atlas file, correct? September 24, 2019, 08:05:48 am - (Auto Merged - Double Posts are not allowed before 7 days.) Ok, another thing. The game seems to have an opcode for his/her:$B0. Since there is a ton of misgendering in the original script when you pick a female hero (getting called son, boy, etc) we went for a more gender neutral approach (child, etc). I haven't yet ventured into the realm of creating new opcodes but I assume the idea of making one like son/daughter or he/she would pose quite a challenge. 
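For reference, the BASE POINTER arithmetic abw describes a couple of posts up works out like this; a minimal Python sketch using only the numbers from his example (the function names are purely illustrative):
Code: [Select]
# BASE POINTER = file offset of the string (header included) minus the RAM
# address stored in the pointer; the numbers below are the ones from the example.
def base_pointer(string_file_offset, pointer_ram_value):
    return string_file_offset - pointer_ram_value

def file_offset(pointer_ram_value, base):
    # going the other way: where a given pointer value lands in the ROM file
    return pointer_ram_value + base

print(hex(base_pointer(0x40010, 0x8000)))  # 0x38010 for the bank $10 strings
print(hex(base_pointer(0x44010, 0x8000)))  # 0x3c010 for the bank $11 strings
print(hex(file_offset(0x8000, 0x38010)))   # back to 0x40010
Those two offsets are the same values that later show up as the #HDR arguments in the per-bank insertion example.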
But again, it's not something I'm too worried about since we are relatively happy with our current gender neutral language. September 24, 2019, 09:35:37 pm - (Auto Merged - Double Posts are not allowed before 7 days.) **UPDATE It would appear that $B6 is an opcode for son/daughter. It's amazing that these things exist in the game but the game doesn't use them 9/10 of the time :laugh: Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 24, 2019, 09:42:44 pm :woot!: :woot!: :woot!: Just got the main script fully dumped--All 11 sections of it. (11 and not 12 because bank$11's final pointer conveniently addressed 16 strings. Congratulations! Since the compression you mentioned before is totally new territory for me, should I be doing anything differently with my formatting or simply prepare the insertion script the same way as I did for DQ1 and 2? After that I will be reaching out to you for help with adding code for the compression. Also, this is not a concern right now, but I believe I'll be setting up a separate boundary for each $4000 rom bank in the atlas file, correct? No, the insert script formatting is pretty much the same either way. Bank boundaries will take a little extra work, but not too much; basically I just set up per-bank insertion ranges and then update the ROM -> RAM address calculation offset, so something like: Quote #JMP($40010, $43FE7) #HDR($38010) // bank $10 script here #JMP($44010, $47FE7) #HDR($3C010) // bank $11 script here // etc. As for fitting your larger script in, try to cram as much as you can into each bank and then see if the rest will fit in all the extra space in bank$15. If it does, then great; it looks like the code for controlling which bank to load for a given string starts at 0x03AE9F, so updating a couple of numbers there should be enough to make things work. If not, then you get to decide whether you want to add compression or whether you want to use up some of the other giant empty areas of space. I haven't yet ventured into the realm of creating new opcodes but I assume the idea of making one like son/daughter or he/she would pose quite a challenge. But again, it's not something I'm too worried about since we are relatively happy with our current gender neutral language. You mean like the B6=[son/daughter] or B2=[he/she] control codes that already exist? :D If you're happy with a gender-neutral script, that's fine, but the original game does come equipped with several control codes that display different text based on the hero's gender, so you have the option of using gender-specific language too. For control codes, here's what I had (I think the A* ones were not used in the main script, but you can confirm that): Quote B0=[his/her] B1=[himself/herself] B2=[he/she] B3=[him/her] B5=[he's/she's] B6=[son/daughter] B7=[(s)-B7] B8=[check if orb placed] A0=[y/ies] A1=[an/en] A2=[ol/lls?] A3=[i/ls?] A4=[(es)-A4] A5=[(s)-A5] A6=[a/e] A7=[no plural] C0=[(s)-C0] EB=[line]\n ED=[sacrificial merchant's name] /EE=[end-EE]\n\n /EF=[end-EF]\n\n F0=[class] F2=[letter] F3=[spell] F4=[item] F5=[name] F7=[number] F8=[number] F9=[hero] FD=[wait] /FE=[end-FE]\n\n /FF=[end-FF]\n\n Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on September 24, 2019, 10:10:18 pm Just like what has frequently happend, I figured out that son/daughter is an opcode right before you told me. In fact I had cracked every single one on your list except for the A sequence which, yes, wasn't in the script. 
FYI you may have a problem with assigning both $F7 and$F8 the same designation: [number]. I used [value] for $F8. I seem to recall you saying that you weren't exactly planning to do anything with DQ3. I'm very amused that you seem to have done all this charting out ahead of me simply for fun. :laugh: I've decided to just leave the spell/monster/item lists for the insertion, which is what I did last time for DQ2 according to my notes. That means it's time to start porting in the script. This will be lengthy but shouldn't be much of a headache. After that I'll start playing around with the bank loading code you pointed out. We'll almost certainly be discussing that further but I'll give it a crack. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on September 24, 2019, 11:45:13 pm Just like what has frequently happend, I figured out that son/daughter is an opcode right before you told me. In fact I had cracked every single one on your list except for the A sequence which, yes, wasn't in the script. See? You don't need me for this stuff at all anymore ;). FYI you may have a problem with assigning both$F7 and $F8 the same designation: [number]. I used [value] for$F8. Hmm, yup, good catch. I guess that must have been the point where I got sidetracked, since I had the other duplicate texts separated. Based on its usage in the script, F7 must be used for cardinal numbers (One, Two, Three, ...) and F8 for digits (1, 2, 3, ...). I seem to recall you saying that you weren't exactly planning to do anything with DQ3. I'm very amused that you seem to have done all this charting out ahead of me simply for fun.  :laugh: Don't you know by now that you can't believe everything you read on the internet? :P Translating just 1 and 2 feels wrong - either finish the Erdrick trilogy or the NES quadrilogy! But I got distracted by other things and didn't get much further than an initial script dump + replay (for refreshing my memory, checking some control codes I wasn't sure about, and generating a decent CDL file to use for disassembling). I've decided to just leave the spell/monster/item lists for the insertion, which is what I did last time for DQ2 according to my notes. That means it's time to start porting in the script. This will be lengthy but shouldn't be much of a headache. After that I'll start playing around with the bank loading code you pointed out. We'll almost certainly be discussing that further but I'll give it a crack. Sounds like a plan to me! Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: tvtoon on September 28, 2019, 10:03:50 pm So I finally translated the biggest RAM mapping work I have done for these games, the DQ3 GBC version. I have been stocking this here for more than half a decade, always forget to post this massive stuff but when I saw this topic, flashes happened! I was going to do the NES and SNES versions too, but time is short plus it shares many traits with them. The work is now available at Data Crystal, enjoy! (https://datacrystal.romhacking.net/wiki/Dragon_Quest_III_%28Game_Boy_Color%29:RAM_map) Off: it is about time to update those templates! Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Grimoire LD on September 29, 2019, 08:29:20 am Spectacular! I was shocked at how little information there was on this legendary game and that about satisfied my curiosity for the most part (for most matters outside of battle). Great work! 
Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on October 07, 2019, 10:57:07 pm So I finally translated the biggest RAM mapping work I have done for these games, the DQ3 GBC version. I have been stocking this here for more than half a decade, always forget to post this massive stuff but when I saw this topic, flashes happened! I was going to do the NES and SNES versions too, but time is short plus it shares many traits with them. The work is now available at Data Crystal, enjoy! (https://datacrystal.romhacking.net/wiki/Dragon_Quest_III_%28Game_Boy_Color%29:RAM_map) Off: it is about time to update those templates! Great work! We may find some value in this for the NES projects and it certainly looks like a nice win for Dragon Quest hacking community in general. @abw nejimakipiyo and I are quite far along in getting the Atlas file ready for DQ3 but we ran into a roadblock with miscellaneous text. We are having a difficult time verifying it against the Japanese. In some cases, it is dependent on rare items; in other cases my partner would have to do another playthrough; in yet other cases we have no idea what the pieces of text connect to in the game. Therefore we decided that we would need to do an extraction of the DQ3 Japanese rom's text. We spent tonight building a kana table file and then were going to try to locate the beginning of the script along with relevant pointers. The first problem we hit was that I can't get Windhex32 to display the kana. I spent a fair amount of time investigating forums and some people seem to indicate that Windhex32EX is the version I need but that doesn't seem to be available anywhere. Anyway, I moved on to an idea I caught from a forum to use romaji instead of kana in the table file. It gets a little dicey with hiragana vs katakana vs small hiragana all translating to the same combinations of letters but we worked out a .tbl that seemed sensible. Once I loaded that, I didn't see any great swaths of text that I was expecting. The text seems to be limited to a couple short segments, and with nejimakipiyo looking over them, they appeared to be related to items. Now I'm getting DQ2 flashbacks of the compressed script, something that wouldn't surprise me considering the infamous storage issues they had with Japanese DQ3 resulting in a cut title screen and fanfare. So to boil that all down to a couple questions: 1, any thoughts about my kana viewing issues in a hexeditor? 2, do you think my suspicions of compression are correct? And 3, if compression, could the solution be relatively simple like the bit based table file we created for DQ2? Going down this kana extraction path has created some real headaches but I think it's pretty important. I wouldn't feel great about knowing that several English lines were left alone and never vetted. I'm not exactly the "good enough is good enough" type, fortunately and unfortunately. abw, thank you as always... Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 09, 2019, 08:41:44 pm The work is now available at Data Crystal, enjoy! (https://datacrystal.romhacking.net/wiki/Dragon_Quest_III_%28Game_Boy_Color%29:RAM_map) Great work! We may find some value in this for the NES projects and it certainly looks like a nice win for Dragon Quest hacking community in general. +1 :cookie:! It'll be interesting to see how well the GBC RAM map matches up to the NES RAM map. Therefore we decided that we would need to do an extraction of the DQ3 Japanese rom's text. 
Wait, if you didn't have a script dump of either the English or Japanese text, what were you translating from? 1, any thoughts about my kana viewing issues in a hexeditor? Saving your table file encoded in Shift-JIS, loading it in WindHex, and then enabling the "View Text Data As Unicode" menu option does the trick for me. If you're going with romaji and want/need to differentiate between the kana systems, one thing I've done in the past is to use e.g. lowercase for hiragana and uppercase for katakana. 2, do you think my suspicions of compression are correct? And 3, if compression, could the solution be relatively simple like the bit based table file we created for DQ2? Yes, DQ3's text encoding appears to be quite similar to DQ4's from a structural point of view. It's a 6-bit encoding with hiragana <=> katakana switches on a $3C (a.k.a. %111100) and dictionary switches on$3D/3E/3F. The major obvious difference between the two encodings is the dictionary contents and some of the kana being different/reordered. For table file building, you can use the DQ4 example that ships with abcde as a basis; DQ3's 6-bit -> 8-bit hiragana lookup table starts at 0x3BB8A, its katakana lookup table starts at 0x3BBC6 (these two mostly follow the order of tiles in the PPU, but not exactly), and the 6 dictionary pointers are at 0x3BC74, pointing to $FE-terminated dictionary entries in ROM bank$10 (i.e. starting at 0x28B4B). Once you've got that sorted out, it looks like the main script pointer table starts at 0x3BC02. In other news, I did a little bit of poking around the text engine code for DW3, and adding in DTE was pretty easy; the $80 -$AF byte range was apparently unused and had its own little code path, so I stole that for the new DTE entries, and $30 DTE entries is enough to compress the original script by over 27%, so if you're only 5% - 10% larger than the original, you should have plenty of room to spare. Assuming Choppasmith is eventually going to want to insert a script that's somewhere around 100% larger than the original, I also looked at the code that sets up the bank and pointer for a given string ID, and rewrote it to make using extra banks easy; in combination with the DTE compression, using 3 of the existing empty ROM banks should be enough space to hold double the original text. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on October 10, 2019, 03:32:16 pm Quote Wait, if you didn't have a script dump of either the English or Japanese text, what were you translating from? Well, I said above that I had an English dump so we are definitely using that. :P It probably would have been more efficient for us to crack our heads getting the Japanese dump at the beginning of the process, but instead we obtained *most* of the Japanese script through various resources online and a Japanese playthrough. I have a bad habit of choosing the longer and less technically difficult path, which is quite foolish when here I am struggling with the technicalities of a Japanese extraction regardless. Quote Saving your table file encoded in Shift-JIS, loading it in WindHex, and then enabling the changing the settings to "View Text Data As Unicode" menu option does the trick for me. If you're going with romaji and want/need to differentiate between the kana systems, one thing I've done in the past is to use e.g. lowercase for hiragana and uppercase for katakana. I ended up downloading the Notepad++ software after investigating Shift-JIS and kind of fell in love with it in general. 
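As a quick sanity check on the 6-bit -> 8-bit lookup tables abw points to above, a rough Python sketch like the following will dump them for comparison against a PPU viewer. The 0x3C-entry table size is only inferred from the gap between the two offsets, and the file name is a placeholder for your own copy of the ROM (assuming the same headered file offsets used above), so treat both as assumptions:
Code: [Select]
# Dump the DQ3 6-bit -> 8-bit lookup tables at the file offsets given above
# (0x3BB8A hiragana, 0x3BBC6 katakana). Table size and file name are assumptions.
with open("Dragon Quest III (J).nes", "rb") as f:
    rom = f.read()

HIRA, KATA, SIZE = 0x3BB8A, 0x3BBC6, 0x3C
for token in range(SIZE):
    print(f"%{token:06b} -> hiragana ${rom[HIRA + token]:02X}, katakana ${rom[KATA + token]:02X}")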
As far as viewing kana in WindHex, I'm stuck on your instruction of "View Text Data As Unicode". I do see the option of "View Text Data As Japanese" under options, but I've scoured the tabs at the top several times and can't find the Unicode option. Quote Yes, DQ3's text encoding appears to be quite similar to DQ4's from a structural point of view. It's a 6-bit encoding with hiragana <=> katakana switches on a$3C (a.k.a. %111100) and dictionary switches on $3D/3E/3F. Ok, the concepts of 3 table files and switches are making my head hurt. This reminds me of the numerous table files used for fixing the SHILD issue in DW2 but I'd be lying if I said I actually understood that incredible web of files you sent me. Now I'll try to comprehend this (since 3 is better than 20). So if I understand correctly, every time the game alternates between displaying a word in hiragana vs katakana that a switch byte--$3C appears and causes the game to pull from a different character table. Does this $3C switch appear in the code that tells the game to display the text or does it appear in the text itself? I assume that my Cartographer instructions would have to load all the table files and then tell abcde to perform the same switches between tables based on the presence of that byte. If you could point out the instructions that do that in your DQ4 cartographer doc that would be helpful. Quote DQ3's 6-bit -> 8-bit hiragana lookup table starts at 0x3BB8A, its katakana lookup table starts at 0x3BBC6 (these two mostly follow the order of tiles in the PPU, but not exactly), and the 6 dictionary pointers are at 0x3BC74, pointing to$FE-terminated dictionary entries in ROM bank $10 (i.e. starting at 0x28B4B). Once you've got that sorted out, it looks like the main script pointer table starts at 0x3BC02. This is another very confusing element here. In fact, I'm so confused I can hardly even articulate an appropriate question. I did see in the table files that the characters with diacritics used 8 bits instead of 6. Wouldn't they just be pairs of bytes at this point and could be saved in the table file as such? And therefore wouldn't they essentially be uncompressed and I could rely on the byte information showing in a PPU viewer? Quote so if you're only 5% - 10% larger than the original, you should have plenty of room to spare. During my second big round of editing as I've been adding our script lines into the insertion file one at a time, I've found several opportunities to reduce redundancies and make for punchier language. Nothing was compromised, and in fact I strongly prefer dense, to the point writing (in spite of what my RHDN forum activity would probably indicate.) That all said, it would seem likely that I may not have a problem with text space. But we shall see. Dealing with that problem can't be any more daunting than these obstacles around extracting DQ3's Japanese script. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: filler on October 10, 2019, 06:57:30 pm As far as viewing kana in WindHex, I'm stuck on your instruction of "View Text Data As Unicode". I do see the option of "View Text Data As Japanese" under options, but I've scoured the tabs at the top several times and can't find the Unicode option. Those are both the same command, essentially telling WindHex to display bytes that appear in a table file as their corresponding Japanese characters. 
The reason for the change in wording is likely because "View Text as Unicode" was not accurate since it reads table files in S-JIS format and that setting simply renders the characters in Japanese. It's more accurate to say "View Text Data as Japanese" and Bongo changed the wording in subsequent version(s). Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on October 10, 2019, 09:58:39 pm Those are both the same command, essentially telling WindHex to display bytes that appear in a table file as their corresponding Japanese characters. The reason for the change in wording is likely because "View Text as Unicode" was not accurate since it reads table files in S-JIS format and that setting simply renders the characters in Japanese. It's more accurate to say "View Text Data as Japanese" and Bongo changed the wording in subsequent version(s). Thanks for the clarification. I did experiment with that and tried it again just now. I loaded the Japanese Dragon Quest 2 rom, loaded a hiragana table file that has been converted to SHIFT-JIS, and changed to the mode of View Text Data as Japanese. It showed basically all the data as kanji characters, but I didn't see any of the hiragana. I used that same table file several months ago to do a text dump of the kana and it came out more or less correctly. So that all still doesn't solve the issue. hmm.. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 10, 2019, 10:19:38 pm I have a bad habit of choosing the longer and less technically difficult path, which is quite foolish when here I am struggling with the technicalities of a Japanese extraction regardless. Sometimes it can be very tricky indeed to tell whether the path that looks easy will actually end up being any faster than the path that looks difficult. That's a pain I'm sure most of us here can sympathize with ;). Ok, the concepts of 3 table files and switches are making my head hurt. Alright, how about an example? 
If you talk to the old man standing by the pool in the lower left corner of Aliahan castle at the start of a new game, he says: Quote とうぞくバコタの つくった カギは かんたんなドアを すべて あけたそうじゃ That text starts partway through the byte at 0x2D3D2: E7 F8 BC DD 01 8A F1 8D 91 1E E3 FA F0 3D C4 F1 9D 85 B4 FB 54 F3 73 00 F2 CE 8C DF D3 D2 D8 02 0F FA 5E 7E Which in binary is: 11100111 11111000 10111100 11011101 00000001 10001010 11110001 10001101 10010001 00011110 11100011 11111010 11110000 00111101 11000100 11110001 10011101 10000101 10110100 11111011 01010100 11110011 01110011 00000000 11110010 11001110 10001100 11011111 11010011 11010010 11011000 00000010 00001111 11111010 01011110 01111110 More specifically, the old man's text starts at the second-last bit of 0x2D3D2, so if we ignore the first 6 bits (which are the end token for the previous string), we get 11 11111000 10111100 11011101 00000001 10001010 11110001 10001101 10010001 00011110 11100011 11111010 11110000 00111101 11000100 11110001 10011101 10000101 10110100 11111011 01010100 11110011 01110011 00000000 11110010 11001110 10001100 11011111 11010011 11010010 11011000 00000010 00001111 11111010 01011110 01111110 DQ4 uses 6-bit tokens instead of 8-bit, so considering that string of bits in groups of 6 gives us: 111111 100010 111100 110111 010000 000110 001010 111100 011000 110110 010001 000111 101110 001111 111010 111100 000011 110111 000100 111100 011001 110110 000101 101101 001111 101101 010100 111100 110111 001100 000000 111100 101100 111010 001100 110111 111101 001111 010010 110110 000000 001000 001111 111110 100101 111001 111110 which tokenizes as: Table File Token Text/Effect (hiragana) 111111 [switch to dictionary$3F for 1 token] (dict_$3F) 100010 とう[add dakuten to next token]そく (hiragana) 111100 [switch to katakana] (katakana) 110111 [add dakuten to next token] (katakana) 010000 ハ (katakana) 000110 コ (katakana) 001010 タ (katakana) 111100 [switch to hiragana] (hiragana) 011000 の (hiragana) 110110 (hiragana) 010001 つ (hiragana) 000111 く (hiragana) 101110 っ (hiragana) 001111 た (hiragana) 111010 [line] (hiragana) 111100 [switch to katakana] (katakana) 000011 カ (katakana) 110111 [add dakuten to next token] (katakana) 000100 キ (katakana) 111100 [switch to hiragana] (hiragana) 011001 は (hiragana) 110110 (hiragana) 000101 か (hiragana) 101101 ん (hiragana) 001111 た (hiragana) 101101 ん (hiragana) 010100 な (hiragana) 111100 [switch to katakana] (katakana) 110111 [add dakuten to next token] (katakana) 001100 ト (katakana) 000000 ア (katakana) 111100 [switch to hiragana] (hiragana) 101100 を (hiragana) 111010 [line] (hiragana) 001100 す (hiragana) 110111 [add dakuten to next token] (hiragana) 111101 [switch to dictionary$3D for 1 token] (dict_$3D) 001111F へ (hiragana) 010010 て (hiragana) 110110 (hiragana) 000000 あ (hiragana) 001000 け (hiragana) 001111 た (hiragana) 111110 [switch to dictionary$3E for 1 token] (dict_$3E) 100101 そう[add dakuten to next token]しゃ (hiragana) 111001 [end] (those extra 6 bits at the end are the start of the next string) So, every time the game reads a$3C (%111100), it toggles between hiragana and katakana and stays in the new table until the next time it reads a switch token; when it reads a $3D,$3E, or $3F (%111101/%111110/%111111), it switches to the corresponding dictionary for 1 token (or at least that's the high level effect; the actual ASM divides the bits up differently and treats the dictionary as 6 parts with 32 entries each rather than 3 parts with 64 entries each). 
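Here is a minimal Python sketch of that tokenization, fed with the same 36 bytes quoted from 0x2D3D2. It only models the 6-bit token stream, the $3C hiragana/katakana toggle, and the $3D-$3F one-token dictionary switches; dakuten handling, the end/line tokens, and the real 6-bit -> 8-bit lookups are left out:
Code: [Select]
raw = bytes.fromhex(
    "E7 F8 BC DD 01 8A F1 8D 91 1E E3 FA F0 3D C4 F1 9D 85 B4 FB"
    " 54 F3 73 00 F2 CE 8C DF D3 D2 D8 02 0F FA 5E 7E"
)
bits = "".join(f"{b:08b}" for b in raw)[6:]   # drop the previous string's end token
tokens = [int(bits[i:i + 6], 2) for i in range(0, len(bits) - len(bits) % 6, 6)]

table = "hiragana"
pending_dict = None
for t in tokens:
    if pending_dict is not None:              # token right after a dictionary switch
        print(f"%{t:06b}  (entry from dictionary ${pending_dict:02X})")
        pending_dict = None
    elif t == 0b111100:                       # $3C: toggle hiragana <-> katakana
        table = "katakana" if table == "hiragana" else "hiragana"
        print(f"%{t:06b}  [switch to {table}]")
    elif t >= 0b111101:                       # $3D-$3F: dictionary for one token
        pending_dict = t
        print(f"%{t:06b}  [switch to dictionary ${t:02X} for 1 token]")
    else:
        print(f"%{t:06b}  ({table} token ${t:02X})")
Its output should line up token-for-token with the table above.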
In the sample DQ4 table files, DQ4's table switches happen on the Quote !%111100=,<@katakana>:%111100 !%1111=,<@dictionary>:1 lines. You'll notice that I split the 12 bits for the dictionary switch + dictionary entry into 4 bits for switching and 8 bits (1 byte) for the entry; keeping them as 6 bits each will give you the exact same effect, so which form you prefer is entirely up to you. This is another very confusing element here. In fact, I'm so confused I can hardly even articulate an appropriate question. I did see in the table files that the characters with diacritics used 8 bits instead of 6. Wouldn't they just be pairs of bytes at this point and could be saved in the table file as such? And therefore wouldn't they essentially be uncompressed and I could rely on the byte information showing in a PPU viewer? DQ3's dictionary entries are a series of 8-bit values, so they get to use the full 8-bit range, but the hiragana and katakana entries are only 6-bit values and they get translated into 8-bit values via a pair of lookup tables, one for hiragana (e.g. hiragana %000000 translates to$0B, which is あ in the PPU viewer) and one for katakana (e.g. katakana %000000 translates to $3D, which is ア in the PPU viewer). You should be able to read the dictionary entries and 6-to-8-bit lookup tables with a byte-based table file, but the script itself is still encoded as a series of 6-bit tokens, so you won't be able to see that as easily. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on October 13, 2019, 01:40:17 am Assuming Choppasmith is eventually going to want to insert a script that's somewhere around 100% larger than the original, I also looked at the code that sets up the bank and pointer for a given string ID, and rewrote it to make using extra banks easy; in combination with the DTE compression, using 3 of the existing empty ROM banks should be enough space to hold double the original text. Thank you for this! From earlier posts I was looking at the amount of space going, with the first two games so far, I'll probably need DTE at least. For control codes, here's what I had (I think the A* ones were not used in the main script, but you can confirm that): I had heard about the mostly gender neutral, but infamous "boy" dialog in the Japanese version as well. This is great for me because the mobile script just gives the female hero a separate string, while there's some like below where they ADD dialog for Female Hero, there's a lot of duplicate lines. Code: [Select] My father was always telling me stories about the mighty hero Ortega.<10 And now here I am working in my dad's place, and there you are adventuring in yours.<10 It's true what they say, isn't it? Like father, like son!<41<63 Well, daughter in your case!<64<0F I think it was something the Japanese script did to help correct said dialog problem in the original. Also I just saw as I'm typing this there ARE cases where the gender is adjusted for strings. Weird Code: [Select] We expected no less of you, <41son<63daughter<64 of Ortega! We have witnessed the birth of a true hero! So I finally translated the biggest RAM mapping work I have done for these games, the DQ3 GBC version. I have been stocking this here for more than half a decade, always forget to post this massive stuff but when I saw this topic, flashes happened! I was going to do the NES and SNES versions too, but time is short plus it shares many traits with them. The work is now available at Data Crystal, enjoy! 
(https://datacrystal.romhacking.net/wiki/Dragon_Quest_III_%28Game_Boy_Color%29:RAM_map) Off: it is about time to update those templates! Hey thanks for this! I still plan on tackling the GBC remakes down the line. So, the more info on III, the better! So abw, sorry to bring up DW2, I really hope this is the last thing. But I allllmost got the code right for changing battle dialog to: 1: Ignore counting the last few bosses 2: Replace the multiple enemy counts with "Some" and "A/An" 3: Change the extra groups that appear to "And/AND some [monster]" here's my code Code: [Select] norom ; stop Asar from trying to apply SNES memory mapping to this NES code org$00BF00 ; set the ROM file insertion point base $BEF0 ; set the starting RAM address LDA$0161 ; monster ID for the current group CMP #$4E ; bosses have IDs >= #$4E (so does the "Enemies" monster, but that's not a monster ID you can encounter) BCS no_change LDY $8F ; number of monsters in group DEY ; count from 0 instead of 1 BEQ one ; 0 => only one monster => handle "A" vs "An" LDX #$00    ; read index LDY #$00 ; write index loop: LDA some,X ; Monster Counts text STA$60F1,Y ; start of text variable buffer INY INX CMP #$FA ; [end-FA] BNE loop ; if not end token, keep copying done: SEC ; SEC to trigger read of [end-FA]-terminated string from$60F1, CLC to use A RTS some: db $36,$18,$16,$0E,$FA ;"Some" not using a table here one: ; at this point we know Y = 0 LDA #$24 ; "A" STA $60F1,Y ; start of text variable buffer LDA$6119 ; first letter of monster name CMP #$24 ; "A" BEQ an CMP #$28 ; "E" BEQ an CMP #$2C ; "I" BEQ an CMP #$32 ; "O" BEQ an CMP #$38 ; "U" BNE no_change an: LDA #$17 ; "n" INY STA $60F1,Y ; start of text variable buffer no_change: LDA #$FA ; [end-FA] STA $60F1,Y ; start of text variable buffer BNE done no_cardinal: LDA #$FA ; [end-FA] the game will handle trimming the empty space from the STA $60F1 BNE done The thing that has me stumped is the A/an part. In battle if I get a single enemy with a consonant letter I get, for example _ Dracky where the underscore is an extra space the A should go. Yet getting a single Iron Ant gives me "A Iron Ant" (to my credit, I was getting nothing but garbage before, so this has come a long way before this post) Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on October 14, 2019, 12:07:39 pm @abw Thank you for the very detailed breakdown of how the compressed script works. This was helpful but there are still some things I need to understand before I can do this. First, I did figure out the issue with viewing kana in WindHEX. It's essentially what you all were saying. I needed to use a software like Notepad++ to change the coding from UTF-8 variants to Shift-JIS. Interestingly, when I change the encoding to the latter, it makes the Japanese characters unviewable to me in the table files so not a good situation for editing. Perhaps I need to download something for windows to allow my PC to read Shift-JIS? The kana become things like x82xAD, etc. When I take that same garbled table file, load it in windhex and view data in Japanese it actually works. Let me qualify that: it works in a non compressed Japanese roms like DQ2, but not compressed ones like DQ3 or DQ4. When I try to load your included hira table file (reencoded to Shift-JIS) for DQ4 into the game, it doesn't show me anything at all--probably because Windhex isn't equipped to view data in that 6 bit format. 
With that being the case, I don't know how I'm going to be able to locate certain things like start address of the script. I would imagine it starts at the beginning of a rom bank and if you told me what the address of the first script bank was, I could probably use the pointers to figure out how many banks hold text and get the rest of the info I need. Well, I might need the address of the final script pointer since you only gave me the first. :P The other thing I need help with is understanding some aspects of your DQ4 example files. The cartographer.txt file you included has all the different sections for different parts of the script (all with mysteriously different string lengths) but the only table file that gets referenced is the hira.tbl. You explained how the switch bytes cause the game to switch between the two character sets and the dictionary but you didn't seem to explain how abcde is going to do the switching. Since I don't see any table switching instructions in the cartographer file, I also looked at the table files themselves and the only thing that might be the instructions are !%111100=,<@katakana>:0 !%1111=,<@dictionary>:1 I'd think that these were the switching instructions if it wasn't for the fact that the table names aren't included here. But I do notice theres a little @katakana symbol at the top of the kata.tbl file. Is that the reference point that this instruction points at allowing the switching to happen during extraction? Well, that's the extent of my questions for now. I truly wish I was better at figuring these kinds of things out on my own. I am certainly trying my hardest before coming back to you. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 14, 2019, 10:11:43 pm The thing that has me stumped is the A/an part. In battle if I get a single enemy with a consonant letter I get, for example _ Dracky where the underscore is an extra space the A should go. Yet getting a single Iron Ant gives me "A Iron Ant" Have you tried stepping through the code in a debugger to follow along with exactly what it's doing and where it's going wrong? Spoiler: It looks like the Y values aren't getting updated correctly between writes to$60F1,Y, so the last letter you write is getting clobbered by $FA; making sure there's an INY after each write should help. I don't know how I'm going to be able to locate certain things like start address of the script. If you want to play along, Trace Logger + Debugger tells all. It looks like the main part of the text engine code starts somewhere around$0E:$B9D1, so that would be a good place for an execute breakpoint. The bank specified by$A8 gets swapped in before reading from the script and the engine setup code sets $A8 to either #$10, #$11 (via INC$A8), or #$02 based on the string index, so those banks ought to match up with the script pointers. You explained how the switch bytes cause the game to switch between the two character sets and the dictionary but you didn't seem to explain how abcde is going to do the switching. No, that part gets covered in abcde's readme where I spend several hundred words and many examples explaining how table file switching works by default and various ways you can control specific details of the switching process in order to mirror your game's behaviour :P. 
The following excerpt is particularly relevant here: Quote from: readme.txt # matchType says how many matches to make in the new table before falling back to the current table: # * 0 => keep going as long as you can; # * X => make exactly X matches in the new table, where X is any positive decimal integer; # * -1 => fall back right away; I'd think that these were the switching instructions if it wasn't for the fact that the table names aren't included here. You can also find where the table files get referenced by looking at the example insert batch file / shell script: Quote from: Atlas.bat perl ..\..\..\abcde.pl -m text2bin -t hira.tbl -t kata.tbl -t dict.tbl -cm abcde::Atlas Atlas.nes Atlas.txt Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on October 20, 2019, 01:31:02 am Have you tried stepping through the code in a debugger to follow along with exactly what it's doing and where it's going wrong? Spoiler: It looks like the Y values aren't getting updated correctly between writes to$60F1,Y, so the last letter you write is getting clobbered by $FA; making sure there's an INY after each write should help. So I was in fact able to figure out on my own that, yeah, something was going on with reading the first letter of the Monster name (somehow it was reading an M instead of an I in Iron Ant) But I had no idea it was because of a missing Increment (INY). Thank you so much. Am I correct in understanding that Y, in this case, is basically the "cursor" in reading and printing letters/strings and that's why you need to increment it after a printed letter? Also, in the other thread when I brought up rewriting the dialog into present tense instead of past tense, namely for "[monster] appear/s" you suggested using the "Special S" but I quickly realized unlike nouns the usage of s in verbs is reversed. Unlike 1 Coin/10 Coins, you have 1 monster appears/3 monsters appear (I even thought, "Oh! Draws near! Wait, no, that's the same thing".) I just want to ask if there was something you were thinking of that I'm not thinking of otherwise I think it's a pretty negligible sacrifice to use the past tense "appeared". Sure am glad to see DW3 seems to use present tense "appear" and "appears". :) Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 20, 2019, 04:19:59 pm Am I correct in understanding that Y, in this case, is basically the "cursor" in reading and printing letters/strings and that's why you need to increment it after a printed letter? Yup, in that particular code, Y is used as the write index: "STA$60F1,Y" means to store the current value of A to the address $60F1 + Y, so when Y is 0, that$60F1, when Y is 1, that's $60F2, etc. If you open the Hex Editor, load the appropriate DW2 table file, and scroll down to$60F1, you can step through the code and watch the string being assembled there. Also, in the other thread when I brought up rewriting the dialog into present tense instead of past tense, namely for "[monster] appear/s" you suggested using the "Special S" but I quickly realized unlike nouns the usage of s in verbs is reversed. Unlike 1 Coin/10 Coins, you have 1 monster appears/3 monsters appear (I even thought, "Oh! Draws near! Wait, no, that's the same thing".) I just want to ask if there was something you were thinking of that I'm not thinking of otherwise I think it's a pretty negligible sacrifice to use the past tense "appeared". Sure am glad to see DW3 seems to use present tense "appear" and "appears". 
:) Hmm, yes, that is a good point. If you have enough space, you could add a new control code that does the opposite of what that $F2 does, i.e. prints "s" when$8F-$90 is 1 instead of not 1, and then use that in your script. But using "appeared" instead of "appear"/"appears" is also fine too. In other news, I noticed on TCRF (https://tcrf.net/Dragon_Warrior_II_(NES)) that Cannock Castle also had a map change between the Japanese and English releases, so I figured I'd make a restoration patch for that too, and while I was at it, I figured I might as well also upload some patches for DW1 and DW2, so hopefully those will be approved shortly! Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on October 22, 2019, 12:02:59 am In other news, I noticed on TCRF (https://tcrf.net/Dragon_Warrior_II_(NES)) that Cannock Castle also had a map change between the Japanese and English releases, so I figured I'd make a restoration patch for that too, and while I was at it, I figured I might as well also upload some patches for DW1 and DW2, so hopefully those will be approved shortly! Yeah I saw that, it's really minor change that just makes you go "Why?" There's a lot more in DW3, but the Data Crystal entry says map graphics are essentially like text. I would assume they're done in rows so I hope it turns out to be easier to edit. So, abw, I did a "fresh" rip of DW2 with abcde v5. Copied my script over and changed the appropriate variables and there's two weird things. While my previous Atlas.txt was kinda all over the place, my new one is more in line with the rip. First: It ripped the Prologue and everything else fine, but when I try to re-insert it says it can't tolkenize spaces and points to the first space I have there (Left side of the first line). I realized I needed to add the Table line to make sure it uses the spacer.tbl file but the Atlas readme isn't too insightful on how to use different tables I have it as: Code: [Select] #VAR(nonScriptTBL, TABLE) #ADDTBL("spacer.tbl", main) // or whatever you named your table file #ACTIVETBL(main) I'm sure it's something painfully obvious I'm missing. If I don't add the main part it says it tries to call a table that doesn't exist, and when it's added it says main table unknown. @_@ Second: As I said in a previous post, I incorporated your full name hack (thank you again for that :))and I ripped the script in the menus to see what you did. I see that the new value$9C is for names in battle with trailing spaces, but I'm having trouble with the new field menu party list. YOUR Script shows for the beginning solo Midenhall status like this Code: [Select] ═NOMEN═════GR══SL══LM[border line] <$5F>[Midenhall's short name] [Midenhall's level]<$5F>[Midenhall's current HP]<$5F>[Midenhall's current MP][line] I think something might be going wrong with the fact that NORMALLY$5F and $81 are both spaces. I guess I'm stumped because I expected some kind of new value instead of "Midenhall's short name". Does the$5F really do something different here needing a change to the menu_params.tbl? EDIT: Oh! I see your patches have just been approved! Maybe I'll use the menu improvement patch instead of trying to reverse engineer it from your Latin translation! EDIT2: Between your Latin version and seeing how your new menu improvement patch works, I'm getting really turned around from the pointer values jumping back and forth. Makes it super confusing to edit! 
I totally see what you did but I just need to know: If I rip that part of the script with abcde/cartographer, with it's random Pointer Jumps (going from pointer #73 to Pointer #8) and edit the menus will Atlas still insert it properly? I'm curious how you even went in and said "Okay $7694 is practically the same as this menu pointed to$7690 so just point to that one"?(within something like abcde, I get how you'd do it manually) EDIT3: Not a question, just a minor observation, but I'm surprised with your battle menu edit, you didn't just move the command menu to the left a couple of spaces over so that the monster name window doesn't overlap with the command window. That's what I did *shrug* Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 22, 2019, 08:36:51 pm Yeah I saw that, it's really minor change that just makes you go "Why?" Yup, that's pretty much exactly what I said when I saw that too :P. There's a lot more in DW3, but the Data Crystal entry says map graphics are essentially like text. I would assume they're done in rows so I hope it turns out to be easier to edit. Let's hope so - editing DW2's maps is only fun in a masochistic sort of way :(. I realized I needed to add the Table line to make sure it uses the spacer.tbl file but the Atlas readme isn't too insightful on how to use different tables I have it as: Code: [Select] #VAR(nonScriptTBL, TABLE) #ACTIVETBL(main) I'm sure it's something painfully obvious I'm missing. If I don't add the main part it says it tries to call a table that doesn't exist, and when it's added it says main table unknown. @_@ The variable name you create with #VAR has to match up with the variable name you use in #ADDTBL and #ACTIVETBL. In my insert script, I just went with a plain old boring: Code: [Select] #VAR(Spacer, TABLE) #ACTIVETBL(Spacer) but you can use whatever names you want as long as you use them consistently. I see that the new value $9C is for names in battle with trailing spaces In the menu patch I uploaded separately, menu control code$9C now prints all 8 bytes of the current hero + 1's name with both $5F and$60 spaces replaced by top borders. The version I used in my translation is a bit different - it only replaces $60 spaces, which worked since I changed Cannock/Moonbrooke's names to use$60 instead of $5F (matching Midenhall, who also uses$60 spaces). EDIT2: Between your Latin version and seeing how your new menu improvement patch works, I'm getting really turned around from the pointer values jumping back and forth. Makes it super confusing to edit! I totally see what you did but I just need to know: If I rip that part of the script with abcde/cartographer, with it's random Pointer Jumps (going from pointer #73 to Pointer #8) and edit the menus will Atlas still insert it properly? I'm curious how you even went in and said "Okay $7694 is practically the same as this menu pointed to$7690 so just point to that one"?(within something like abcde, I get how you'd do it manually) I'm not sure what you're saying here - I did re-order how the menu data is stored, but the menu pointed at by $7694 is still the same menu in both the original and patched versions (I've dubbed it "Menu ID #$21: Mini status window, top, Midenhall + Cannock + Moonbrooke"), and it's still a separate menu from the menu pointed at by $7690 ("Menu ID #$1F: Mini status window, bottom, Midenhall + Cannock + Moonbrooke"), which is likewise also the same menu in both cases. 
If it bothers you that much you can re-sort the menus to be in the same order as their pointers; I just found the original ordering to be both counter-intuitive and counter-productive for editing purposes, so I changed it. EDIT3: Not a question, just a minor observation, but I'm surprised with your battle menu edit, you didn't just move the command menu to the left a couple of spaces over so that the monster name window doesn't overlap with the command window. That's what I did *shrug* After trying it both ways for a while, I eventually decided I liked the overlapping menu effect more. I was trying to keep everything within the bounds of the 24-tile main dialogue window, and mostly succeeded, except the 16-tile equipment list and 10-tile attack/defense power list didn't quite fit and moving the attack/defense power window just looked weird. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Swordmaster on October 23, 2019, 05:05:45 am In other news, I noticed on TCRF (https://tcrf.net/Dragon_Warrior_II_(NES)) that Cannock Castle also had a map change between the Japanese and English releases, so I figured I'd make a restoration patch for that too, and while I was at it, I figured I might as well also upload some patches for DW1 and DW2, so hopefully those will be approved shortly! Just wanted to clarify that the original location of Cannock/Samaltria was changed before even the Japanese release.  'Twas a casualty of last minute balancing before release.  And we all know how perfectly balanced the finished product is.   :laugh: Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on October 23, 2019, 03:27:27 pm I'm not sure what you're saying here - I did re-order how the menu data is stored, but the menu pointed at by $7694 is still the same menu in both the original and patched versions (I've dubbed it "Menu ID #$21: Mini status window, top, Midenhall + Cannock + Moonbrooke"), and it's still a separate menu from the menu pointed at by $7690 ("Menu ID #$1F: Mini status window, bottom, Midenhall + Cannock + Moonbrooke"), which is likewise also the same menu in both cases. If it bothers you that much you can re-sort the menus to be in the same order as their pointers; I just found the original ordering to be both counter-intuitive and counter-productive for editing purposes, so I changed it. Sorry, allow me to simplify: -I ripped the menu dialog using the defualt example script provided -This makes the output text jump to pointers randomly. Instead of Pointer 0:Text, Pointer 1:Text, Pointer 2:Text, etc, this causes the output file to Pointer 0:Text, Pointer 30: text, Pointer 31: Text, Pointer 9: Text, etc. -I also noticed with the way the text was reordered a lot of the pointers would just display a lot of the same dialog -This was tricky to figure out with your Latin translation because I would see stuff like CUI and go "Crap, which menu does this refer to?" After using Google Translate and playing the patched rom I was able to figure it out. What it really comes down to is: Since there's a separate patch for the English version now, I just need to change a couple letters like Armor->Armour, the Healer/Chruch Menu etc. Is it safe to edit and re-insert the text like this or am I going to need to take a different approach? 
Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on October 23, 2019, 10:09:16 pm -I ripped the menu dialog using the defualt example script provided Keep in mind the example script is for the original game, so if you're going to re-dump from the patched ROM, you'll need to adjust the #SCRIPT STOP to $7F3D otherwise the last few bytes of menu data will get cut off. -This makes the output text jump to pointers randomly. Instead of Pointer 0:Text, Pointer 1:Text, Pointer 2:Text, etc, this causes the output file to Pointer 0:Text, Pointer 30: text, Pointer 31: Text, Pointer 9: Text, etc. Yup. And? As long as the pointers keep pointing to the right thing, it mostly doesn't matter what order the things are arranged in (on the NES, anyway). -I also noticed with the way the text was reordered a lot of the pointers would just display a lot of the same dialog I'm not sure what you're seeing here - no two of the menu pointers point to the same string. -This was tricky to figure out with your Latin translation because I would see stuff like CUI and go "Crap, which menu does this refer to?" After using Google Translate and playing the patched rom I was able to figure it out. Yeah, it took a little while to track down which menu was which, particularly with all those WHOM menus. If you check the commented disassembly (http://datacrystal.romhacking.net/wiki/Dragon_Warrior_II::ROM_map/ASM_bank_01), the menu pointer table at$01:$B642 is a handy reference for stuff like that. What it really comes down to is: Since there's a separate patch for the English version now, I just need to change a couple letters like Armor->Armour, the Healer/Chruch Menu etc. Is it safe to edit and re-insert the text like this or am I going to need to take a different approach? Editing and re-inserting should be fine... the only thing to watch out for is that the code for printing bordered names starts at$7F40, so if you need to use any of that space, you'll have to update the ASM file to move the border code somewhere else and then re-assemble it. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on November 16, 2019, 04:11:11 pm Keep in mind the example script is for the original game, so if you're going to re-dump from the patched ROM, you'll need to adjust the #SCRIPT STOP to $7F3D otherwise the last few bytes of menu data will get cut off. Yup. And? As long as the pointers keep pointing to the right thing, it mostly doesn't matter what order the things are arranged in (on the NES, anyway). I'm not sure what you're seeing here - no two of the menu pointers point to the same string. Yeah, it took a little while to track down which menu was which, particularly with all those WHOM menus. If you check the commented disassembly (http://datacrystal.romhacking.net/wiki/Dragon_Warrior_II::ROM_map/ASM_bank_01), the menu pointer table at$01:$B642 is a handy reference for stuff like that. Editing and re-inserting should be fine... the only thing to watch out for is that the code for printing bordered names starts at$7F40, so if you need to use any of that space, you'll have to update the ASM file to move the border code somewhere else and then re-assemble it. Thanks abw, I've procrastinated quite a bit, but it's coming together nicely now. I'm just having a weird issue with abcde/atlas. 
I fixed the table issue I mentioned above, but now it gives me the message Code: [Select] table 'C:\Users\mog11\Downloads\abcde_v0_0_5\eg\NES\Dragon Warrior II\spacer.tbl' contains a control code table switch parameter to non-existant table 'main' at C:\Users\mog11\Downloads\abcde_v0_0_5/abcde/Table/Table.pm line 111. at Atlas.txt line 0! I thought I'd try to start fresh with a new unzipped copy of abcde, but there I get Code: [Select] attempt to write beyond $7F3D at C:\Users\mog11\Downloads\abcde_v0_0_5c/abcde/Atlas.pm line 476, <COMMAND_FILE> line 3088. variable 'Table' has already been declared at Atlas.txt line 3088! Line 3088 is the Credits part of my Atlas.txt which is set up as #VAR(Credits, TABLE) #ADDTBL("credits_spacer.tbl", Credits) #ACTIVETBL(Credits) Another weird thing is that after inserting the new Prologue the game just skips it (Start new game > Fades to black > Moonbrooke cutscene) I figured maybe I have a couple of lines too long. I thought it would be like doing the title screens in DW1. Room for 32 spaces with an even amount of spaces on either side of the text to center it as long as it adds up to 32. Could that not be the case here? Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on November 17, 2019, 01:58:49 pm @abw I thought I'd provide my own miserable update. The last we left off, you referred me to the table switching section of abcde's documentation. Over the last few weeks I've made numerous attempts to digest it. "It" being copied below: Quote # Table switch entries are new (and/or improved, depending on what you're already familiar with); this can get a bit complicated, so let's add some more table files and see some examples. # The table switch entry format looks like this: !lhs=<label>,<@table ID>:matchType,<@table ID>:matchType,<@table ID>:matchType,... # Where <label> is an optional label for the control code that, if provided, appears in text output, # <@table ID>: is an optional table ID that will be used to continue matching; if you don't provide a table ID, one of two things will happen: # * if you don't provide any value here, you get raw hexadecimal-encoded output; # * if you provide the special value <binary> (note the lack of "@"), you get raw binary-encoded output; # matchType says how many matches to make in the new table before falling back to the current table: # * 0 => keep going as long as you can; # * X => make exactly X matches in the new table, where X is any positive decimal integer; # * -1 => fall back right away; # * $hex or %bin => keep going until you match $hex or %bin again (the "$" or "%" are required here so that we can tell whether 10 means ten, sixteen, or two). # Once the matching condition for the table that was switched into has been satisfied, translation will continue with the table that did the switching. # # For table entries that require exactly X matches, the default match counting behaviour is for each matched table entry in the new table to count as 1 match.
# However, different games can count matches in different ways, so abcde provides support for modifying the counting behaviour in a couple of ways: # * you can change the number of matches a table entry counts as by suffixing the left-hand side of that entry with "<Y>" for any non-negative decimal integer Y; the table entry will then be counted as Y matches instead of 1 match; # * you can do the same thing with matchType when using $hex or %bin; # * matchType can be followed by a "+" to indicate that matches in the new table should also contribute towards the number of required matches in the current table. In short, your explanation reads to me like something written from a programmer to a programmer, and I personally can't make heads or tails of it. I was able to follow everything above that section of the doc, probably due to A: being already knowledgeable on the topics, and B: the concepts being simpler in general. I feel like you need a whole separate set of instructions written for the laypeople (if laypeople ultimately have any business trying to take on these kinds of projects in the first place.) What might be a lot more helpful to me is attempting to simply copy the examples you've provided from DQ4. I may try that in a few days with my next opportunity to be alone in a quiet house. I do want to acclimate myself to actually being able to read these kinds of instructions, but I feel like I need to get a hold of some books that provide a general introduction to programming before that is possible. It's a shame to feel this stuck when so much work has been done for the DQ3 re-translation. Maybe I return to the idea of grinding for all the 1/256 rare drops that we ultimately need the Japanese text from. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on November 24, 2019, 07:39:17 pm Sorry for leaving you both hanging - I've been offline again for the past little while. I fixed the table issue I mentioned above, but now it gives me the message Code: [Select] table 'C:\Users\mog11\Downloads\abcde_v0_0_5\eg\NES\Dragon Warrior II\spacer.tbl' contains a control code table switch parameter to non-existant table 'main' at C:\Users\mog11\Downloads\abcde_v0_0_5/abcde/Table/Table.pm line 111. at Atlas.txt line 0! That means the 'spacer.tbl' file wants to be able to switch to the table whose ID is 'main', but none of the table files that got loaded have a 'main' ID (@main), so you're missing at least one table file. I thought I'd try to start fresh with a new unzipped copy of abcde, but there I get Code: [Select] attempt to write beyond $7F3D at C:\Users\mog11\Downloads\abcde_v0_0_5c/abcde/Atlas.pm line 476, <COMMAND_FILE> line 3088. variable 'Table' has already been declared at Atlas.txt line 3088! That looks like a) whatever text you were trying to insert (if it was capped at $7F3D, it's probably the menu text) was too long and b) you tried to declare 'Table' as a variable name multiple times (i.e. you have multiple "#VAR(Table, ****)" commands where **** is the type, which I'm guessing is probably TABLE). Another weird thing is that after inserting the new Prologue the game just skips it (Start new game > Fades to black > Moonbrooke cutscene) I figured maybe I have a couple of lines too long. Hmm, that one does sound odd.
Assuming you haven't (possibly accidentally) made any code changes, my first guess would be that your new prologue text starts with an end token, causing the game to skip ahead to the Moonbrooke cutscene, though I haven't spent much time looking at bank 7, so it could easily be something else. The text length shouldn't be an issue; I haven't tried it, but I think the game just keeps printing text until it reaches an end token. I thought it would be like doing the title screens in DW1. Room for 32 spaces with an even amount of spaces on either side of the text to center it as long as it adds up to 32. Could that not be the case here? If you look at spacer.tbl, you'll see the first byte controls the left indent (if you need an indent outside the 1 - 15 range, you can add it to the table file yourself), and then text is stored using the main (non-script) 8-bit encoding, with lines terminated by $FF, so that's quite different than the DW1 title screen, which was basically a 32x28 rectangle with some RLE. In short, your explanation reads to me like something written from a programmer to a programmer, and I personally can't make heads or tails of it. That was actually my sad attempt at end-user documentation right there - I was hoping the examples would flesh out the technical descriptions, but apparently not :P. Table switching is by far the most complicated thing about table files, and if you're going to be writing your own, you need to know the gory details. If a basic table entry looks like "03=cat" and means that the byte 03 corresponds to the text "cat", then one kind of table switch entry looks like "!00=<[Animal Name]>,<@animals>:1". That means that the byte 00 triggers a switch to a new table with the ID "animals", reads exactly 1 match from that table and then reverts to the current table (the one that contains that "!00=<[Animal Name]>,<@animals>:1" entry), and that the text "[Animal Name]" appears in the extracted text immediately before whatever got matched in the "animals" table. So if your main table (which I'm going to call "main") and "animals" table look like this: Quote @main !00=<[Animal Name]>,<@animals>:1 80=main @animals 03=[cat] 04=[dog] then the bytes 000380 would correspond to the text "[Animal Name][cat]main" and the bytes 000480 would correspond to the text "[Animal Name][dog]main". But different games implement table switching in different ways and you might not want to see all the switches listed in your text, so there are lots of options. First off, the "label" part is optional, so you could say: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 80=main @animals 03=[cat] 04=[dog] and then the bytes 010380 would correspond to the text "[cat]main". You might want to read exactly 2 or exactly 3 matches from the "animals" table instead, so you could say: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] and then the bytes 02030480 would correspond to the text "[Animal Name][cat][dog]main" and the bytes 0303040580 would correspond to the text "[Animal Name][cat][dog][parakeet]main". You can do that for any positive number of matches, not just 1, 2, or 3.
If you want to read exactly 1,000,000 matches from the "animals" table, go for it: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] and then the bytes 04... would get you a really long text! On the other hand, maybe you don't want to read a predetermined exact number of matches. Maybe you want to keep reading animal names until you come across a certain terminator, such as if your list of animals was terminated with a FF byte: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] In that case, the bytes 050304FF80 would correspond to the text "[Animal Name][cat][dog]main". Or maybe there is no explicit terminator and you want to keep reading animal names until you come across something that doesn't match any known animal: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] and then the bytes 06030480 would correspond to the text "[Animal Name][cat][dog]main". Another way of accomplishing the FF-terminated example is this: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] !FF=<>,-1 and then the bytes 060304FF80 would also correspond to the text "[Animal Name][cat][dog]main". You're also not limited to switching to just one table; if a 07 byte means that the game reads an animal name and then a colour, you could represent that as: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 !07=<[Animal Name/Colour]>,<@animals>:1,<@colours>:1 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] !FF=<>,-1 @colours 00= and then the bytes 07030080 would correspond to the text "[Animal Name/Colour][cat]main". And then you can play with how many matches each table entry counts as; if there's something special about elephants and they count for twice as much as cats or dogs, you could represent that as: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 !07=<[Animal Name/Colour]>,<@animals>:1,<@colours>:1 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] 06<2>=[elephant] !FF=<>,-1 @colours 00= and then the bytes 020680 would correspond to the text "[Animal Name][elephant]main". 
You're also not limited to switching only one table deep; with tables like these: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 !07=<[Animal Name/Colour]>,<@animals>:1,<@colours>:1 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] 06<2>=[elephant] !07=<[Dog Breed]>,<@dogbreeds>:1 !FF=<>,-1 @colours 00= @dogbreeds 00=[Labrador] 01=[Chihuahua] the bytes 00070080 would correspond to the text "[Animal Name][Dog Breed][Labrador]main". Counting can also be transitive, so if you needed to include matches in the "dogbreeds" table when counting matches from the "main" table, you could say: Quote @main !00=<[Animal Name]>,<@animals>:1 !01=<>,<@animals>:1 !02=<[Animal Name]>,<@animals>:2 !03=<[Animal Name]>,<@animals>:3 !04=<[Animal Name]>,<@animals>:1000000 !05=<[Animal Name]>,<@animals>:$FF !06=<[Animal Name]>,<@animals>:0 !07=<[Animal Name/Colour]>,<@animals>:1,<@colours>:1 80=main @animals 03=[cat] 04=[dog] 05=[parakeet] 06<2>=[elephant] !07=<[Dog Breed]>,<@dogbreeds>:1 !08=<[Dog Breed]>,<@dogbreeds>:1+ !FF=<>,-1 @colours 00= @dogbreeds 00=[Labrador] 01=[Chihuahua] and then the bytes 02080080 would also correspond to the text "[Animal Name][Dog Breed][Labrador]main", where "[Labrador]" gets counted as a match against both the "animals" -> "dogbreeds" switch and the "main" -> "animals" switch. Maybe I return to the idea of grinding for all the 1/256 rare drops that we ultimately need the Japanese text from. Instead of spending hours grinding for rare drops from multiple rare enemies, some other approaches include finding where your party's items are stored in RAM and editing the desired items into your inventory, finding the drop rate data and pumping all the drop rates up to the maximum, or finding the code that decides whether you get a drop or not and forcing it to always give a drop. Actually obtaining the items in-game gives you the benefit of seeing the mystery strings in more context than a text dump gives, so you can observationally verify which items use which strings rather than having to wade through the code or make questionable assumptions such as the order of item text being in any way related to the order of the items. As for my little update, it turns out that DW3 uses 8 different control codes for various English monster name pluralization rules, but 1 of them is a "do nothing" rule and 3 of them are wrong, so really it only needs 4 codes. For translation, however, it took me a fair bit of work to get down to 14 codes... looks like I've got some more ASM work in my future! Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on December 07, 2019, 12:35:14 pm @abw Thank you as always. I wanted to provide a different kind of update. I've been taking some time off from working specifically on the steps I'm at with this project in order to focus on learning the basics of 6502. I know I've said that for awhile, but I've finally made it an element of my schedule where I spend some time doing learning and exercises every day. I'm very surprised to find that if I had a shit day or even had a few drinks that evening, the daily consistency still allows me to continue to make distinct progress. Virtually all the problems we've been facing lately have required me to have some kind of ASM / debugging knowledge so I thought it was time to make that the focus.
It's not like I have more than 3 people clamoring for a release date of DQ3 Delocalized anyway. :laugh: I'm sure I'll have questions in the future, but hopefully they will be at a higher level and I won't require these 3 page answers. See ya'll soon. :beer: Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on December 08, 2019, 09:45:56 pm No worries! Glad to hear the 6502 learning is progressing well - as with most things in life, practice brings improvement :thumbsup:. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on January 07, 2020, 10:09:24 pm Okay big update from me. After a bunch of frustration and procrastination, I figured out and fixed the kinks that were giving me trouble. If some newbie romhacker (or maybe just new to abcde) is reading this, then you might want to give this a nice long read! One thing I was missing before that I stupidly thought atlas/cartographer/abcde handled by itself before was adding #JMP commands before every block. Looking at an unaltered ROM I was noticing that the Prologue was being inserted directly AFTER the menu text. So yeah #JMP($Address) or #JMP($Start Address, $End Address) is very important! Okay now maybe I messed up somewhere and this works perfectly fine for abw and anyone else, but I was still getting an error message that there was a duplicate table ID, @main. Again, if there's newbie romhackers using this, make sure your atlas.bat has all the tables you need! I double checked and the only table I had that was labelled @main was dw2.tbl. I fixed it and found success by changing the table used in various menus (directions, names, etc) from dw2.tbl to menu.tbl and editing menu.tbl to add values FA=[End-FA] and FF=[End-FF]. Worked like a charm! Then there was the Prologue. First I was getting instances where it just completely skipped the text and now it was hanging on a blank screen doing nothing. Drove me nuts, I thought maybe my lines were too long or something. abw I love you, but I think your spacer.tbl is wonky. I think there's a genuine typo in your DW2 Cartographer.txt as well. After comparing an untouched ROM with one that's been altered I FINALLY noticed that Atlas/abcde was changing the pointer at $1CAB2 from 8A to 0A which is silly because there's only one string in the Prologue and I'm not changing its location. Now correct me if I'm wrong, but I think this is because the base pointer in cartographer.txt is listed at 14010 when it should be 1C010. At least that's the only reason I can think of that the pointer value is 8000 lower than it should be. Have to admit, I've figured that part out, but don't know how to fix it, I'm just manually editing $1CAB2 after I do an insertion. Okay so back to spacer.tbl. Once I figured out the pointer problem I was getting text in game, but it would freeze when it got to my third line in the first stanza. It's supposed to be this Code: [Select] Once upon a time, a young hero descended from the legendary warrior Erdrick defeated the peace and light to the world. Yeah those are some long lines, I figured even if you have to do a total of 32 lines (number left spaces*2 + number of letters in a line) I tried cutting it down just for testing sakes and got this Code: [Select] Once upon a time, a young hero descended from the warrior Erdrick defeated and restored peace and light to the world. And what's weird, this is how it looked in game (https://i.imgur.com/esKiWQc.png) And I looked in the ROM and instead of having $07 for 7 spaces it put in $0F and $5F.
I mean, this doesn't mean I'm stuck - I'll probably just use Pointer Tables since I don't have to worry about Pointers here, but I figured you should know. Also, thanks Chicken Knife for giving me your latest Atlas.txt you used. I thought maybe you had a working "complete" version (menus, names, and everything) so I could see what was wrong with mine, but it was still helpful. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Chicken Knife on January 08, 2020, 10:08:29 am I actually have all the menu stuff on a separate atlas file. I really only needed that for one purpose (fixing SHILD) so I didn't bother combining it all together. I can send you the separate file tonight. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on January 08, 2020, 11:08:01 pm One thing I was missing before that I stupidly thought atlas/cartographer/abcde handled by itself before was adding #JMP commands before every block. Good news, I've made some upgrades for the eventual v0.0.6 release - the Cartographer code will now output #VAR/#ADDTBL/#ACTIVETBL and #HDR/#JMP commands for you! Again, if there's newbie romhackers using this, make sure your atlas.bat has all the tables you need! v0.0.6 might also help with this - there was a line buried in the abcde::Atlas help text about how you were supposed to load all your table files on the command line instead of inside the Atlas command file, but I've cleared that up now. I fixed it and found success by changing the table used in various menus (directions, names, etc) from dw2.tbl to menu.tbl and editing menu.tbl to add values FA=[End-FA] and FF=[End-FF]. Worked like a charm! Be careful with this - if you try inserting 2 or more adjacent spaces with the menu table, you'll get the menu's $82 - $87 control codes, which will quite possibly result in a mess when non-menu code encounters them. abw I love you, but I think your spacer.tbl is wonky. ;D I think there's a genuine typo in your DW2 Cartographer.txt as well. After comparing an untouched ROM with one that's been altered I FINALLY noticed that Atlas/abcde was changing the pointer at $1CAB2 from 8A to 0A which is silly because there's only one string in the Prologue and I'm not changing its location. Now correct me if I'm wrong, but I think this is because the base pointer in cartographer.txt is listed at 14010 when it should be 1C010. For the prologue, I've got #JMP($1CAC2, $1CC22) and #HDR($14010) in both my translation's actual insert script and abcde's example, and both of those commands are correct. The value of the pointer at $1CAB1 is $8AB2, and $8AB2 + $14010 = $1CAC2. I also get $8AB2 as the pointer value when running abcde's example insert script, so I think something else is going on here. Any chance you could send me a link to your insert script so I can take a look? I tried cutting it down just for testing sakes and got this I notice your quote has a bunch of trailing spaces on the second line ("hero "); my guess is that those are probably what's causing the issues on the third line. It looks like the prologue display code might only handle 30 bytes per line, so 32 bytes could cause some weirdness - what happens if you take those extra spaces out? I thought maybe you had a working "complete" version (menus, names, and everything) so I could see what was wrong with mine, but it was still helpful.
I probably had a complete insert script when I started my translation, but it got cannibalized along the way and has been altered to work with all the ASM changes I made, so it doesn't really apply to the original script very well any more. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: Choppasmith on January 10, 2020, 12:56:39 pm Good news, I've made some upgrades for the eventual v0.0.6 release - the Cartographer code will now output #VAR/#ADDTBL/#ACTIVETBL and #HDR/#JMP commands for you! Nice! That'll be a huge help! Quote Be careful with this - if you try inserting 2 or more adjacent spaces with the menu table, you'll get the menu's $82 - $87 control codes, which will quite possibly result in a mess when non-menu code encounters them. Well the only thing I really need dw2.tbl for anyway is stuff like Crests and Moonbrooke/Cannock names, and those can just be edited manually if need be, but I'll keep a lookout thanks! Quote ;D For the prologue, I've got #JMP($1CAC2, $1CC22) and #HDR($14010) in both my translation's actual insert script and abcde's example, and both of those commands are correct. The value of the pointer at $1CAB1 is $8AB2, and $8AB2 + $14010 = $1CAC2. I also get $8AB2 as the pointer value when running abcde's example insert script, so I think something else is going on here. Any chance you could send me a link to your insert script so I can take a look? I notice your quote has a bunch of trailing spaces on the second line ("hero         "); my guess is that those are probably what's causing the issues on the third line. It looks like the prologue display code might only handle 30 bytes per line, so 32 bytes could cause some weirdness - what happens if you take those extra spaces out? Okay, yeah I did have some extra spaces there and it worked fine when I removed them (I think I was going to add those spaces for trial and error, then decided against it and forgot. Typical). I noticed I was still getting some freezing, and noticed it was just spacer.tbl messing things up. Some lines it would put a space ($5F) instead of $FF at the end of a line and the game NEEDS that byte for the number of spaces ($01-$0F) and as said above it seems Atlas/abcde misinterpreted some of those preceding spaces as regular ($5F) spaces. Also, worth noting, when you get to around 28 characters in a line (not including the spaces) things will flicker a bit. Also, here's my atlas.txt http://www.mediafire.com/file/o0hmhps1m4ss5oa/Atlas%25282%2529.txt/file I'm really curious what's causing that Prologue pointer to be changed. Title: Re: Dragon Warrior 1, 2 & 3 Hacking Discussion Post by: abw on January 11, 2020, 09:59:50 pm Nice! That'll be a huge help! Yeah, it's slow progress in between other things, but at least it is progress! I noticed I was still getting some freezing, and noticed it was just spacer.tbl messing things up. The freezing appears to be due to trying to insert lines longer than 30 bytes - it seems like the game really doesn't like that. I haven't spent much time looking at the prologue code, so I'm not sure what its actual limitations are. Some lines it would put a space ($5F) instead of $FF at the end of a line and the game NEEDS that byte for the number of spaces ($01-$0F) and as said above it seems Atlas/abcde misinterpreted some of those preceding spaces as regular ($5F) spaces.
I wasn't able to reproduce this myself, but I see that our table files have diverged a fair bit, so that isn't terribly conclusive evidence, especially if you were using the menu table when inserting the prologue. Also, worth noting, when you get to around 28 characters in a line (not including the spaces) things will flicker a bit. Yeah, the original game keeps its line lengths down to 28 bytes including spaces, 24 excluding. My translation seemed okay at 29 bytes including spaces, 26 excluding. I was getting some flickering with your "One hundred years passed..." line; you might want to try splitting/shortening that one. I'm really curious what's causing that Prologue pointer to be changed. You're missing the #HDR($14010) command to adjust the pointer calculations for ROM bank 7, which means the #HDR($-3FF0) is still active; $1CAC2 - $-3FF0 = $20AB2, which has $0AB2 as its low 2 bytes.
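For anyone following along, the pointer arithmetic described here can be sanity-checked with a couple of lines of throwaway code. This is just an illustrative sketch (Python, not part of abcde or Atlas), using the values quoted in this thread:

Code: [Select]
    # Illustrative helpers (not abcde's own code): how an Atlas #HDR base
    # relates the CPU pointer value stored in the ROM to a ROM file offset.

    def rom_offset(pointer, hdr_base):
        # file offset the pointer refers to = stored pointer value + #HDR base
        return pointer + hdr_base

    def pointer_value(offset, hdr_base):
        # pointer value that gets written = target file offset - #HDR base
        return offset - hdr_base

    # Prologue example from this thread: the pointer at $1CAB1 holds $8AB2,
    # and ROM bank 7 uses #HDR($14010).
    assert rom_offset(0x8AB2, 0x14010) == 0x1CAC2

    # If the default #HDR($-3FF0) is still active instead, the same target
    # offset works out to $20AB2, whose low two bytes are $0AB2 - exactly the
    # wrong 0A value that ended up at $1CAB2.
    assert pointer_value(0x1CAC2, -0x3FF0) == 0x20AB2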
2020-02-17 07:49:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4412507712841034, "perplexity": 3824.9254637714103}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141749.3/warc/CC-MAIN-20200217055517-20200217085517-00111.warc.gz"}
https://www.ask-math.com/math-puzzle-3.html
# Math Puzzle 3

This math puzzle 3 is on LCM, wages, and numbers.

1) There are 40 coins in a bag, consisting of $5 and $2 coins. If the total amount is $140, how many $2 and $5 coins are there in the bag? (1) 20 each (2) 25 and 15 (3) 30 and 10 (4) 22 and 18

2) Four years ago, the average age of A and B was 18 years. At present the average age of A, B and C is 24 years. What would be the age of C after 8 years? (1) 30 years (2) 36 years (3) 28 years (4) 25 years

3) The marks obtained by 10 students in Science (out of 50) are 30, 41, 40, 41, 30, 41, 30, 28, 41, 40. The modal mark is: (1) 40 (2) 30 (3) 41 (4) 35

4) 6, 18, 24, 9, 27, 33, 11, ?, ? (1) 15, 19 (2) 22, 27 (3) 33, 39 (4) 44, 47

5) The area of the base of a right cone is 154 m² and its volume is 308 m³. The height of the cone is: (1) 8 m (2) 6 m (3) 7 m (4) 9 m

6) The L.C.M. of two numbers is 12 times their H.C.F. The sum of the H.C.F. and L.C.M. is 403. If one number is 93, then the other number is: (1) 134 (2) 124 (3) 128 (4) 310

7) The wages of 10 workers for a six-day week are $1200. What are one day's wages of 4 workers? (1) $40 (2) $32 (3) $80 (4) $24

8) All natural numbers and 0 are called the _______ numbers. (1) whole (2) prime (3) integer (4) rational

9) X gives ½ of his property to his wife and ½ of the rest to his son. The remainder is divided equally between his two daughters. The share of each daughter is: (1) 1/8 (2) 1/6 (3) ¼ (4) 2/3
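As an illustration of the kind of setup these questions use, problem 1 (a worked sketch, with $x$ the number of $5 coins and $y$ the number of $2 coins) reduces to two linear equations:

$$x + y = 40, \qquad 5x + 2y = 140 \;\Rightarrow\; 3x = 140 - 2\cdot 40 = 60 \;\Rightarrow\; x = 20,\ y = 20,$$

which points to option (1), 20 of each.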
2022-01-29 08:24:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5126091241836548, "perplexity": 3078.137612534461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00009.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-zeros-of-y-6x-2-5x-2-using-the-quadratic-formula
# How do you find the zeros of y = -6x^2 + 5x - 2 using the quadratic formula?

Mar 28, 2016

Here is a short video to demonstrate how to do this. You will need to pick out the necessary components from your function for the substitution. In your case, a = -6, b = 5, and c = -2. Substitute these values into the quadratic formula and you will obtain the roots (zeros) of the equation. From a graphical perspective, this will be the location of your x-intercepts, where the graph crosses the x-axis.
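Carrying out that substitution gives:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{-5 \pm \sqrt{25 - 48}}{2(-6)} = \frac{-5 \pm \sqrt{-23}}{-12} = \frac{5 \mp i\sqrt{23}}{12}.$$

Because the discriminant $b^2 - 4ac = -23$ is negative, the zeros of this particular function are a complex-conjugate pair, so its graph does not actually cross the x-axis; the x-intercept reading applies only when the discriminant is non-negative.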
2020-09-23 02:06:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381143927574158, "perplexity": 210.25426582338224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209665.4/warc/CC-MAIN-20200923015227-20200923045227-00437.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/cpaa.2014.13.2095
# American Institute of Mathematical Sciences

September 2014, 13(5): 2095-2113. doi: 10.3934/cpaa.2014.13.2095

## Stability of delay evolution equations with stochastic perturbations

1 Dpto. Ecuaciones Diferenciales y Análisis Numérico, Facultad de Matemáticas, Universidad de Sevilla, Campus Reina Mercedes, Apdo. de Correos 1160, 41080 Sevilla
2 Department of Higher Mathematics, Donetsk State University of Management, Chelyuskintsev str., 163-a, Donetsk, 83015

Received December 2012. Revised February 2013. Published June 2014.

The investigation of stability for hereditary systems is often related to the construction of Lyapunov functionals. The general method of Lyapunov functionals construction, which was proposed by V. Kolmanovskii and L. Shaikhet, is used here to investigate the stability of stochastic delay evolution equations, in particular, for stochastic partial differential equations. This method had already been successfully used for functional-differential equations, for difference equations with discrete time, and for difference equations with continuous time. It is shown that the stability conditions obtained for the stochastic 2D Navier-Stokes model with delays are essentially better than the known ones.

Citation: Tomás Caraballo, Leonid Shaikhet. Stability of delay evolution equations with stochastic perturbations. Communications on Pure & Applied Analysis, 2014, 13 (5) : 2095-2113. doi: 10.3934/cpaa.2014.13.2095
2019-12-11 07:48:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4720272123813629, "perplexity": 3054.4252646607806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530452.95/warc/CC-MAIN-20191211074417-20191211102417-00299.warc.gz"}
https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/4/lesson/4.2.2/problem/4-84
### Home > AC > Chapter 4 > Lesson 4.2.2 > Problem 4-84

4-84. In spring, the daily high temperature in Boulder, Colorado rises about $\frac{1}{3}$ degree per day. On Friday, May 2, the temperature reached 74°. Predict when the temperature will reach 90°.

Using the equation y = mx + b, put the word problem into equation form. What is the growth rate? What is the starting point? What is the desired temperature? Now, solve for x.

$\frac{1}{3}x$

$\frac{1}{3}x + 74$

$90=\frac{1}{3}x + 74$

$\frac{1}{3}x = 16$, so $x = 48$: 48 days later (Thursday, June 19).
2020-02-22 20:25:54
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4119354784488678, "perplexity": 5308.6098193760345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145713.39/warc/CC-MAIN-20200222180557-20200222210557-00250.warc.gz"}
http://nrich.maths.org/public/leg.php?code=71&cl=4&cldcmpid=5431
Search by Topic Resources tagged with Mathematical reasoning & proof similar to Number Chains: Filter by: Content type: Stage: Challenge level: There are 185 results Broad Topics > Using, Applying and Reasoning about Mathematics > Mathematical reasoning & proof Dalmatians Stage: 4 and 5 Challenge Level: Investigate the sequences obtained by starting with any positive 2 digit number (10a+b) and repeatedly using the rule 10a+b maps to 10b-a to get the next number in the sequence. A Computer Program to Find Magic Squares Stage: 5 This follows up the 'magic Squares for Special Occasions' article which tells you you to create a 4by4 magicsquare with a special date on the top line using no negative numbers and no repeats. Pareq Exists Stage: 4 Challenge Level: Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines. Unit Interval Stage: 4 and 5 Challenge Level: Take any two numbers between 0 and 1. Prove that the sum of the numbers is always less than one plus their product? Stage: 4 Challenge Level: Four jewellers possessing respectively eight rubies, ten saphires, a hundred pearls and five diamonds, presented, each from his own stock, one apiece to the rest in token of regard; and they. . . . Mouhefanggai Stage: 4 Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians call this shape the mouhefanggai. Euclid's Algorithm II Stage: 5 We continue the discussion given in Euclid's Algorithm I, and here we shall discover when an equation of the form ax+by=c has no solutions, and when it has infinitely many solutions. Impossible Sandwiches Stage: 3, 4 and 5 In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot. Proof: A Brief Historical Survey Stage: 4 and 5 If you think that mathematical proof is really clearcut and universal then you should read this article. Janine's Conjecture Stage: 4 Challenge Level: Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . . The Great Weights Puzzle Stage: 4 Challenge Level: You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest? Air Nets Stage: 2, 3, 4 and 5 Challenge Level: Can you visualise whether these nets fold up into 3D shapes? Watch the videos each time to see if you were correct. Sperner's Lemma Stage: 5 An article about the strategy for playing The Triangle Game which appears on the NRICH site. It contains a simple lemma about labelling a grid of equilateral triangles within a triangular frame. To Prove or Not to Prove Stage: 4 and 5 A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples. Whole Number Dynamics III Stage: 4 and 5 In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again. Thousand Words Stage: 5 Challenge Level: Here the diagram says it all. Can you find the diagram? 
Big, Bigger, Biggest Stage: 5 Challenge Level: Which is the biggest and which the smallest of $2000^{2002}, 2001^{2001} \text{and } 2002^{2000}$? Pair Squares Stage: 5 Challenge Level: The sum of any two of the numbers 2, 34 and 47 is a perfect square. Choose three square numbers and find sets of three integers with this property. Generalise to four integers. Whole Number Dynamics IV Stage: 4 and 5 Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens? Whole Number Dynamics V Stage: 4 and 5 The final of five articles which containe the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values. Where Do We Get Our Feet Wet? Stage: 5 Professor Korner has generously supported school mathematics for more than 30 years and has been a good friend to NRICH since it started. Proof of Pick's Theorem Stage: 5 Challenge Level: Follow the hints and prove Pick's Theorem. Whole Number Dynamics II Stage: 4 and 5 This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point. Whole Number Dynamics I Stage: 4 and 5 The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases. Try to Win Stage: 5 Solve this famous unsolved problem and win a prize. Take a positive integer N. If even, divide by 2; if odd, multiply by 3 and add 1. Iterate. Prove that the sequence always goes to 4,2,1,4,2,1... Yih or Luk Tsut K'i or Three Men's Morris Stage: 3, 4 and 5 Challenge Level: Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . . Modulus Arithmetic and a Solution to Differences Stage: 5 Peter Zimmerman, a Year 13 student at Mill Hill County High School in Barnet, London wrote this account of modulus arithmetic. Logic, Truth Tables and Switching Circuits Challenge Stage: 3, 4 and 5 Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record. . . . Stage: 5 Challenge Level: Find all positive integers a and b for which the two equations: x^2-ax+b = 0 and x^2-bx+a = 0 both have positive integer solutions. Recent Developments on S.P. Numbers Stage: 5 Take a number, add its digits then multiply the digits together, then multiply these two results. If you get the same number it is an SP number. Picturing Pythagorean Triples Stage: 4 and 5 This article discusses how every Pythagorean triple (a, b, c) can be illustrated by a square and an L shape within another square. You are invited to find some triples for yourself. Magic Squares II Stage: 4 and 5 An article which gives an account of some properties of magic squares. Iffy Logic Stage: 4 Short Challenge Level: Can you rearrange the cards to make a series of correct mathematical statements? Water Pistols Stage: 5 Challenge Level: With n people anywhere in a field each shoots a water pistol at the nearest person. In general who gets wet? What difference does it make if n is odd or even? 
An Introduction to Number Theory Stage: 5 An introduction to some beautiful results of Number Theory Notty Logic Stage: 5 Challenge Level: Have a go at being mathematically negative, by negating these statements. Diverging Stage: 5 Challenge Level: Show that for natural numbers x and y if x/y > 1 then x/y>(x+1)/(y+1}>1. Hence prove that the product for i=1 to n of [(2i)/(2i-1)] tends to infinity as n tends to infinity. Areas and Ratios Stage: 4 Challenge Level: What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out. L-triominoes Stage: 4 Challenge Level: L triominoes can fit together to make larger versions of themselves. Is every size possible to make in this way? Converse Stage: 4 Challenge Level: Clearly if a, b and c are the lengths of the sides of a triangle and the triangle is equilateral then a^2 + b^2 + c^2 = ab + bc + ca. Is the converse true, and if so can you prove it? That is if. . . . Find the Fake Stage: 4 Challenge Level: There are 12 identical looking coins, one of which is a fake. The counterfeit coin is of a different weight to the rest. What is the minimum number of weighings needed to locate the fake coin? Tetra Inequalities Stage: 5 Challenge Level: Prove that in every tetrahedron there is a vertex such that the three edges meeting there have lengths which could be the sides of a triangle. Proof Sorter - Sum of an AP Stage: 5 Challenge Level: Use this interactivity to sort out the steps of the proof of the formula for the sum of an arithmetic series. The 'thermometer' will tell you how you are doing The Clue Is in the Question Stage: 5 Challenge Level: This problem is a sequence of linked mini-challenges leading up to the proof of a difficult final challenge, encouraging you to think mathematically. Starting with one of the mini-challenges, how. . . . Stage: 5 Short Challenge Level: Sort these mathematical propositions into a series of 8 correct statements. Tree Graphs Stage: 4 Challenge Level: A connected graph is a graph in which we can get from any vertex to any other by travelling along the edges. A tree is a connected graph with no closed circuits (or loops. Prove that every tree. . . . Stage: 4 and 5 Challenge Level: Which of these roads will satisfy a Munchkin builder? A Long Time at the Till Stage: 4 and 5 Challenge Level: Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem? Zig Zag Stage: 4 Challenge Level: Four identical right angled triangles are drawn on the sides of a square. Two face out, two face in. Why do the four vertices marked with dots lie on one line?
2014-12-21 02:39:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34684327244758606, "perplexity": 1245.3051953241134}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770616.6/warc/CC-MAIN-20141217075250-00148-ip-10-231-17-201.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/68391/understanding-index-transformation-in-derivation-of-fourier-transform-for-sampli/68394#68394
# Understanding index transformation in derivation of Fourier transform for sampling rate reduction

Was going over some notes on deriving the Fourier transform equation for sampling rate reduction. Reference: the notes at the link below https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-341-discrete-time-signal-processing-fall-2005/lecture-notes/lec05.pdf or the book Discrete-Time Signal Processing by Alan V. Oppenheim (2nd Edition), equation 4.75. $$r = i + kM$$ I am lost as to how this is obtained. I understand that $$M-1$$ out of every $$M$$ samples are dropped from the original samples, but I still cannot understand how this expression for $$r$$ is derived. Could someone help me understand this?

It's not derived, it's just chosen in a smart way such that the relationship between the decimated and the original sequences becomes obvious. It's just a rearrangement of the terms of the sum. As a simple example, take an infinite sum of numbers $$a_r$$: $$S=\sum_{r=-\infty}^{\infty}a_r\tag{1}$$ Under certain conditions that we don't need to bother with now we can rearrange the sum and write it like \begin{align}S&=\ldots+a_{-2M}+a_{-M}+a_0+a_M+a_{2M}+\ldots\\&\ldots+a_{-2M+1}+a_{-M+1}+a_1+a_{M+1}+a_{2M+1}+\ldots\\&\vdots\\&\ldots+a_{-2M+(M-1)}+a_{-M+(M-1)}+a_{0+(M-1)}+a_{M+(M-1)}+a_{2M+(M-1)}+\ldots\tag{2}\end{align} with some arbitrarily chosen integer $$M$$. So we just start with element $$a_0$$ and add every $$M^{th}$$ element, then we move to element $$a_1$$ and add again every $$M^{th}$$ element, etc. If we do this $$M$$ times, we've added all elements, just like in the original sum $$(1)$$. Eq. $$(2)$$ can be written as $$S=\sum_{i=0}^{M-1}\sum_{k=-\infty}^{\infty}a_{i+kM}\tag{3}$$ which means that we expressed the index $$r$$ as $$r=i+kM$$.

• Thanks for the info. What troubles me is that the downsampled signal only contains a(M) + a(2M) + a(3M) ..., so in the context of summation I would write it as $S=\sum_{k=-\infty}^{\infty}a_{kM}\tag{1}$ Jun 17 '20 at 18:54
• @niil87: But that sum is not over time domain samples, this is a sum of shifted frequency spectra, there's no downsampling involved in that sum. Jun 17 '20 at 19:14
• Thank you for the reply, makes sense! Jun 19 '20 at 9:02
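If it helps, here is a small numeric sanity check of that rearrangement (illustrative only, with the "infinite" sums truncated to a finite range of indices):

    import numpy as np

    # Splitting the index r as r = i + k*M enumerates each r exactly once,
    # so the rearranged double sum (3) matches the single sum (1).
    M = 3        # decimation factor
    N = 12       # number of k terms kept per branch (truncates the sums)
    a = np.random.rand(N * M)          # arbitrary terms a_r, r = 0 .. N*M-1

    single_sum = a.sum()                                                 # eq. (1)
    double_sum = sum(a[i + k * M] for i in range(M) for k in range(N))   # eq. (3)
    assert np.isclose(single_sum, double_sum)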
2021-11-28 09:17:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 17, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9041552543640137, "perplexity": 488.97893256524725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00564.warc.gz"}
http://neda.psdeg-psoe.org/nwhmtw8b/e16010-find-the-subgame-perfect-equilibrium-of-the-game
# Find the subgame perfect equilibrium of the game

Consider the following game: player 1 has to decide between going up or down (U/D), while player 2 has to decide between going left or right (L/R). The stage game has two pure-strategy Nash equilibria: {U, u} and {D, d}. Since the one-shot game has only one subgame (the game itself), its Nash equilibria coincide with its subgame perfect equilibria, so the subgame perfect equilibria of the one-shot game are {U, u} and {D, d} as well.

If this game is repeated two times (t = 1, 2), then find (1) a subgame perfect equilibrium and (2) one Nash equilibrium that is not the subgame perfect equilibrium. I know that in order to find a SPNE (Subgame Perfect Nash Equilibrium) we can use the backward induction procedure, and I am familiar with this procedure. I can solve the problem when the game is played only one time; however, I do not know how to solve it when the game is played two times.

Some background that the answer relies on. A subgame must have a unique starting point: given a node x of the game tree, the part of the tree consisting of all nodes that can be reached from x is a subgame. Each game is a subgame of itself; a subgame on a strictly smaller set of nodes is called a proper subgame. A subgame perfect Nash equilibrium (SPE) is a strategy profile that induces a Nash equilibrium in every subgame of the original game. Every subgame perfect equilibrium is therefore a Nash equilibrium, but not the other way around: subgame perfection is a refinement of Nash equilibrium that rules out equilibria relying on incredible (empty) threats. In games with perfect information, the Nash equilibrium obtained through backward induction is subgame perfect. In practice you may use an algorithm similar to backward induction (see the sketch after the comments below):

1. Find the Nash equilibria of the "smallest" subgame(s).
2. Fix one for each subgame and attach its payoffs to that subgame's initial node.
3. Repeat with the reduced game.

For the twice-repeated game, subgame perfection implies that a Nash equilibrium of the stage game has to be played in the second period after every first-period history. So playing a stage-game Nash equilibrium in each period, independently of history, is subgame perfect, while a strategy profile that threatens non-equilibrium play in the second period can be a Nash equilibrium of the repeated game without being subgame perfect, for the same reason that a pair of grim strategies is never subgame perfect in a finitely repeated game.

• Are you ok with just one (as the singular suggests) or are you looking for the whole set?
• I want to know the method of finding the whole set of SPE for this problem.
• Given that you can solve the one-shot game, perhaps you can provide some context by writing down …
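To make the backward-induction step concrete, here is a small Python sketch (not from the original thread). The payoff numbers are invented for illustration, chosen only so that the stage game has the two pure Nash equilibria {U, u} and {D, d} mentioned above; the helper `pure_nash_equilibria` and the variable names are hypothetical.

```python
# Backward-induction sketch for a twice-repeated 2x2 game.
# Rows: player 1 plays U or D; columns: player 2 plays u or d.
payoffs = {
    ("U", "u"): (4, 7),   # (player 1 payoff, player 2 payoff) -- assumed values
    ("U", "d"): (0, 1),
    ("D", "u"): (1, 0),
    ("D", "d"): (2, 3),
}

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a bimatrix stage game."""
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    eq = []
    for r in rows:
        for c in cols:
            u1, u2 = payoffs[(r, c)]
            best_row = all(u1 >= payoffs[(r2, c)][0] for r2 in rows)  # no profitable deviation for P1
            best_col = all(u2 >= payoffs[(r, c2)][1] for c2 in cols)  # no profitable deviation for P2
            if best_row and best_col:
                eq.append((r, c))
    return eq

stage_ne = pure_nash_equilibria(payoffs)
print("Stage-game Nash equilibria:", stage_ne)

# Backward induction for the twice-repeated game: in period 2 a stage NE must be
# played after every history, so any pair (e1, e2) of stage NE, played
# unconditionally in periods 1 and 2, is a subgame perfect equilibrium.
spe = [(e1, e2) for e1 in stage_ne for e2 in stage_ne]
print("History-independent SPE of the twice-repeated game:", spe)
```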
2021-10-20 04:13:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5496505498886108, "perplexity": 2826.3134881269093}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00097.warc.gz"}
https://codereview.stackexchange.com/questions/77165/registration-form-with-validation-and-error-messages
# Registration form with validation and error messages

I've been working on a registration form page in jQuery Mobile and I think I'm starting to get it fully complete. What I need feedback on is if I've forgotten anything in terms of accessibility, security (am I open to any SQL injections or other risks?) and just anything in general. It's my first time creating a proper registration form like this so surely there must be things to improve. The labels are in Swedish but you can see the translation in the $validation echo.

I would also like some feedback on my PHP validation because it doesn't feel very DRY at all with all the if statements. Perhaps you can combine them in some clever way?

Form markup:

<form id="register-form" method="post" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>">
  <div id="register-inputs-wrapper">
    <label for="fname" class="bold">Förnamn:<span class="warning-text"><?php if(isset($validation['firstname'])) { echo $validation['firstname']; } ?></span></label>
    <input type="text" name="fname" id="fname" value="<?php echo $firstname; ?>" placeholder="Förnamn" alt="Skriv ditt förnamn" data-clear-btn="true">

    <label for="lname" class="bold">Efternamn:<span class="warning-text"><?php if(isset($validation['lastname'])) { echo $validation['lastname']; }?></span></label>
    <input type="text" name="lname" id="lname" value="<?php echo $lastname; ?>" placeholder="Efternamn" alt="Skriv ditt efternamn" data-clear-btn="true">

    if(isset($validation['address'])) { echo $validation['address']; }?></span></label>
    value="<?php echo $address; ?>" placeholder="T.ex. Kungsgatan 40" alt="Skriv din gatuaddress" data-clear-btn="true">

    <label for="postal-code" class="bold">Postnummer:<span class="warning-text"><?php if(isset($validation['postal-code'])) { echo $validation['postal-code']; }?></span></label>
    <input type="text" name="postal-code" id="postal-code" value="<?php echo $postal_code; ?>" placeholder="T.ex. 333 33" alt="Skriv ditt postnummer" data-clear-btn="true">

    if(isset($validation['city'])) { echo $validation['city']; }?></span></label>
    <input type="text" name="city" id="city" value="<?php echo $city; ?>" placeholder="Stad/ort" alt="Skriv din stad eller ort" data-clear-btn="true">

    <label for="education" class="bold">Utbildning:<span class="warning-text"><?php if(isset($validation['education'])) { echo $validation['education']; }?></span></label>
    <select name="education" id="education" alt="Välj utbildning i listan" data-wrapper-class="ui-btn ui-btn-inline">
      <option value="Cobolutvecklare">Cobolutvecklare</option>
      <option value="Programvarutestare">Programvarutestare</option>
      <option value="Projektledare">Projektledare</option>
      <option value="Webbutvecklare">Webbutvecklare</option>
      <option value="Webbutvecklare">Ingen (admin, lärare)</option>
    </select>

    <label for="user-code" class="bold">Behörighetskod:<span class="warning-text"><?php if(isset($validation['usertype'])) { echo $validation['usertype']; }?></span></label>
    <input type="password" name="user-code" id="user-code" value="<?php echo $usertype; ?>" placeholder="4 siffror (XXXX)" alt="Skriv din behörighetskod" data-clear-btn="true">

    <label for="email" class="bold">Email:<span class="warning-text"><?php if(isset($validation['email'])) { echo $validation['email']; } if (isset($validation['existing-user'])) { echo $validation['existing-user']; } ?></span></label>
    <input type="email" name="email" id="email" value="<?php echo $email; ?>" placeholder="Exempel@jensen.se" alt="Skriv din e-post address" data-clear-btn="true">

    <label for="pw" class="bold">Lösenord:<span class="warning-text"><?php if(isset($validation['password'])) { echo $validation['password']; } ?></span></label>
    <input type="password" name="password" id="pw" value="<?php echo $password; ?>" placeholder="T.ex. Jensenonline038" alt="Skriv ett lösenord" data-clear-btn="true">

    <label for="conf-pw" class="bold ui-btn-inline">Bekräfta lösenord:<span class="warning-text left"><?php if(isset($validation['conf_password'])) { echo $validation['conf_password']; } ?></span></label>
    value="<?php echo $confirmed_password; ?>" placeholder="Bekräfta lösenordet" alt="Bekräfta lösenordet" data-clear-btn="true">
  </div>
</form>

<div class="center-text">
  <input type="submit" form="register-form" name="submit_reg" value="Registrera" alt="Slutför registrering" data-icon="check" data-iconpos="right" data-inline="true" data-wrapper-class="space-upper" id="reg-submit">
  <div>
    <a href="#landing-page" class="ui-btn ui-btn-inline ui-corner-all ui-shadow ui-btn-icon-left ui-icon-arrow-l space-top">Gå tillbaks</a>
  </div>
</div>

PHP validation code:

function validate_registration($firstname, $lastname, $address, $postal_code, $city, $usertype, $email, $password, $confirmed_password) {

    //Tell the server that we're accessing the global $db variable
    global $db;

    //Clear the previous errors to ensure
    //that the correct error messages is being displayed
    $fnameErr = $lnameErr = $addressErr = $postal_codeErr = $cityErr = $usertypeErr = $emailErr = $pwErr = $conf_pwErr = $ex_emailErr = '';

    //Remove any excess whitespace
    $firstname = trim($firstname);
    $lastname = trim($lastname);
    $address = trim($address);
    $postal_code = trim($postal_code);
    $city = trim($city);
    $usertype = trim($usertype);
    $email = trim($email);
    $password = trim($password);
    $confirmed_password = trim($confirmed_password);

    //Check that the input values are of the proper format
    if (!preg_match('/^[A-Za-zéåäöÅÄÖ\s\ ]*$/', $firstname)) {
        $fnameErr = 'Förnamnet kan endast innehålla bokstäver (é, a-ö) och mellanslag';
    }
    if (!preg_match('/^[A-Za-zéåäöÅÄÖ\s\ ]*$/', $lastname)) {
        $lnameErr = 'Efternamnet kan endast innehålla bokstäver (é, a-ö) och mellanslag';
    }
    if (!preg_match('/^[A-Za-z0-9éåäöÅÄÖ\s\ ]*$/', $address)) {
        $addressErr = 'Addressen kan endast innehålla bokstäver (é, a-ö), siffror och mellanslag';
    }
    if (!preg_match('/^(se-|SE-){0,1}[0-9]{3}\s?(| |-)[0-9]{2}$/', $postal_code)) {
        $postal_codeErr = 'Ogiltigt format';
    }
    if (!preg_match('/^[A-Za-zéåäöÅÄÖ\s\ ]*$/', $city)) {
        $cityErr = 'Endast bokstäver är tillåtna (é, a-ö)';
    }
    if (!preg_match('/^\S*(?=\S{8,})(?=\S*[a-z])(?=\S*[A-Z])(?=\S*[\d])\S*$/', $password)) {
        $pwErr = 'Minst 8 tecken, en versal och en siffra';
    }
    if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
        $emailErr = 'Ogiltig e-post address';
    }
    if (empty($firstname)) {
        $fnameErr = 'Du måste ange ditt förnamn';
    }
    if (empty($lastname)) {
        $lnameErr = 'Du måste ange ditt efternamn';
    }
    if (empty($address)) {
        $addressErr = 'Du måste ange en address';
    }
    if (empty($city)) {
        $cityErr = 'Du måste ange din stad eller ort';
    }
    if (empty($email)) {
        $emailErr = 'Du måste ange en e-post address';
    }
    if (empty($password)) {
        $pwErr = 'Du måste ange ett lösenord';
    }
    if (empty($confirmed_password)) {
        $conf_pwErr = 'Du måste bekräfta lösenordet';
    }
    if ($confirmed_password != $password && !empty($confirmed_password)) {
        $conf_pwErr = 'Lösenorden matchar inte';
    }

    try {
        $query = 'SELECT * FROM usertypes';
        $prepared_stmt = $db->prepare($query);
        $prepared_stmt->execute();
        $valid_codes = $prepared_stmt->fetch(PDO::FETCH_ASSOC);

        if ($valid_codes['admin'] === $usertype) {
            $usertype = 'Admin';
        } else if ($valid_codes['teacher'] === $usertype) {
            $usertype = 'Teacher';
        } else if ($valid_codes['student'] === $usertype) {
            $usertype = 'Student';
        } else {
            $usertypeErr = 'Ogiltig behörighetskod';
        }

        if (empty($usertype)) {
            $usertypeErr = 'Du måste ange en behörighetskod';
        } else if (!ctype_digit($usertype)) {
            $usertypeErr = 'Endast siffror tillåtet';
        } else if (strlen($usertype) > 4) {
            $usertypeErr = 'Koden är för lång';
        } else if (strlen($usertype) < 4) {
            $usertypeErr = 'Koden är för kort';
        }
    } catch (Exception $e) {
        echo $e;
    }

    try {
        $query = 'SELECT * FROM useraccounts ';
        $query .= 'WHERE email = :email';
        $prepared_stmt = $db->prepare($query);
        $prepared_stmt->bindParam(':email', $email);
        $prepared_stmt->execute();
        $user_exists = $prepared_stmt->fetch();

        if ($user_exists) {
            $ex_emailErr = 'Den angivna e-post addressen existerar redan, vänligen ange en annan giltig e-post address';
        }
    } catch (Exception $e) {
        echo $e;
    }

    if (empty($fnameErr) && empty($lnameErr) && empty($addressErr) && empty($postal_codeErr) && empty($cityErr) && empty($usertypeErr) && empty($emailErr) && empty($pwErr) && empty($conf_pwErr)) {
        return array(
            'state' => true,
            'usertype_text' => $usertype
        );
    } else {
        return array(
            'firstname' => $fnameErr,
            'lastname' => $lnameErr,
            'address' => $addressErr,
            'postal-code' => $postal_codeErr,
            'city' => $cityErr,
            'usertype' => $usertypeErr,
            'email' => $emailErr,
            'password' => $pwErr,
            'conf_password' => $conf_pwErr,
            'existing-user' => $ex_emailErr
        );
    }
}

• From a UX perspective, I suggest you look into implementing Google Places API. It can help reduce user errors, and help with validation as well. – Alex L Jan 10 '15 at 19:50
• @AlexL How does a geographic API help me when validating user information?.. – Chrillewoodz Jan 10 '15 at 20:21
• The Place Search will help users select a location. This way they can search for an address, which Google will help them find accurately, and then in return you can get from that selection a location based on longitude and latitude. Now look at that, you don't have to worry about pesky, million-ways-to-format addresses! – Alex L Jan 10 '15 at 20:25
• @AlexL That's not an issue in this case since all the users will be living in Sweden. But I will keep it in mind for future projects involving addresses, cheers. – Chrillewoodz Jan 11 '15 at 9:02
• You might be interested in this meta. Honestly, we could use someone who knows what they're talking about to weigh in. – RubberDuck Jan 15 '15 at 13:17

## 2 Answers

# Security

First the good news: you use prepared queries, which is a good thing as it prevents SQL injection, one of the nastiest and most common security breaches. Also you escape output to prevent XSS.

But you should improve the following things:

Add a CSRF token!! Otherwise a new administrator can be added by performing a CSRF attack on one of your users that are allowed to add users. As a general rule, add tokens to all forms that use method="post" and for all actions that need a user to be logged in (like log-out links).

What do you do with the input variables before you pass them into validate_registration? I recommend extracting your input variables from $_GET and $_POST in the same place where you validate them, so that you never pass around unnecessary dangerous input.

Do not use $_SERVER['PHP_SELF']; it's an unnecessary dangerous input variable which can be easily modified by an attacker. Instead define a constant base path in your config file like this: define('BASE_URL', '//example.com'); Then you can write <?php echo BASE_URL; ?>/subdir in your template files.

Always escape all variables in your template files that are not supposed to contain HTML, even if you have filtered them. So never write something like: echo $validation['email'];.
Always use htmlspecialchars with ENT_QUOTES, because otherwise ' will not be escaped: very dangerous! For easy use it's best to define a very short global function that will escape your values properly, like this:

/**
 * Escape given input for the use in HTML.
 *
 * @param String $input
 *   Unescaped input.
 *
 * @return String
 *   Escaped input.
 */
function e($input) {
    // Use htmlspecialchars with ENT_QUOTES to escape '.
    return htmlspecialchars($input, ENT_QUOTES, 'UTF-8');
}

# Structure

Your current validation function is a bit hard to read. You can tidy it up a lot if you outsource some things to their own functions and use a loop and filter_var, like this:

<?php

/**
 * Check if a user with given e-mail exists.
 *
 * @param String $email
 *   The e-mail to id the user.
 *
 * @return Boolean
 *   True -> User exists.
 *   False -> User does not exist.
 */
function user_exists($email) {
    global $db;
    $query = 'SELECT count(*) as c FROM useraccounts ';
    $query .= 'WHERE email = :email';
    $prepared_stmt = $db->prepare($query);
    $prepared_stmt->bindParam(':email', $email);
    $prepared_stmt->execute();
    $row = $prepared_stmt->fetch(PDO::FETCH_ASSOC);
    if (empty($row) || ($row['c'] < 1)) {
        return false;
    } else {
        return true;
    }
}

/**
 * Get a list of possible user types.
 *
 * Matches usertype id to readable user type.
 */
function get_user_types() {
    global $db;
    // Get one row from usertypes table and extract field names.
    $query = 'SELECT * FROM usertypes';
    $prepared_stmt = $db->prepare($query);
    $prepared_stmt->execute();
    $row = $prepared_stmt->fetch(PDO::FETCH_ASSOC);
    // Swap keys and values.
    return array_combine($row, array_keys($row));
}

/**
 * Validate user input from registration form.
 *
 * @warning
 *   This function processes unfiltered user input!
 *
 * @param String $firstname
 *   The users first name.
 *
 * @param String $lastname
 *   The users last name.
 *
 * @param String $address
 *   Street and house number of the users residence.
 *
 * @param String $postal_code
 *   Swedish zip code of the users residence.
 *
 * @param String $city
 *   The city of the users residence.
 *
 * @param String $usertype
 *   The type of user, for example teacher.
 *
 * @param String $email
 *   The email of the user.
 *
 * @param String $password
 *   The users password in clear text.
 *
 * @param String $confirmed_password
 *   The repeated input of the users password in clear text.
 *
 * @return Array
 *   On error an Array with error messages.
 *   On success an Array containing a success flag and the usertype as text.
 */
function validate_registration(
    $firstname,
    $lastname,
    $address,
    $postal_code,
    $city,
    $usertype,
    $email,
    $password,
    $confirmed_password
) {
    // Regexp to match text against wanted characters.
    $text_regxp = '/^[A-Za-zéåäöÅÄÖ\s\ ]*$/';

    $usertypes = get_user_types();

    // Callbacks for filters.
$passwords_match = function() use ($password, $confirmed_password) { return ($password == $confirmed_password); };$usertype_exists = function() use ($usertypes) { return in_array($usertype, array_keys($usertypes)); };$user_does_not_exists = function () use($email) { return !user_exists($email); }; $filters = array( array( 'field' => 'firstname', 'var' =>$firstname, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => $text_regxp), 'error_msg' => 'Förnamnet kan endast innehålla bokstäver (é, a-ö) och mellanslag', 'required' => true, 'empty_msg' => 'Du måste ange ditt förnamn', ), array( 'field' => 'lastname', 'var' =>$lastname, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => $text_regxp), 'error_msg' => 'Efternamnet kan endast innehålla bokstäver (é, a-ö) och mellanslag', 'required' => true, 'empty_msg' => 'Du måste ange ditt efternamn', ), array( 'field' => 'address', 'var' =>$address, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => $text_regxp), 'error_msg' => 'Efternamnet kan endast innehålla bokstäver (é, a-ö) och mellanslag', 'required' => true, 'empty_msg' => 'Du måste ange en address', ), array( 'field' => 'city', 'var' =>$city, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => $text_regxp), 'error_msg' => 'Endast bokstäver är tillåtna (é, a-ö)', 'required' => true, 'empty_msg' => 'Du måste ange din stad eller ort' ), array( 'field' => 'postal_code', 'var' =>$postal_code, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => '/^(se-|SE-){0,1}[0-9]{3}\s?(| |-)[0-9]{2}$/'), 'error_msg' => 'Ogiltigt format', 'required' => true, 'empty_msg' => 'You must enter a postal code', ), array( 'field' => 'email', 'var' =>$email, 'filter' => FILTER_VALIDATE_EMAIL, 'filter_options' => null, 'required' => true, 'empty_msg' => 'Du måste ange en e-post address', ), array( 'field' => 'email', 'var' => $email, 'filter' => FILTER_VALIDATE_CALLBACK, 'filter_options' =>$user_does_not_exists, 'error_msg' => 'An user with that e-mail already exists.', ), array( 'var' => $password, 'filter' => FILTER_VALIDATE_REGEXP, 'filter_options' => array('regexp' => '/^\S*(?=\S{8,})(?=\S*[a-z])(?=\S*[A-Z])(?=\S*[\d])\S*$/'), 'error_msg' => 'Minst 8 tecken, en versal och en siffra', 'required' => true, 'empty_msg' => 'Du måste ange ett lösenord', ), array( 'var' => $confirmed_password, 'filter' => FILTER_CALLBACK, 'filter_options' =>$passwords_match, 'required' => true, 'empty_msg' => 'Du måste ange en e-post address', ), array( 'field' => 'usertype', 'var' => $usertype, 'filter' => FILTER_CALLBACK, 'filter_options' =>$usertype_exists, 'error_msg' => 'Du måste ange en behörighetskod', ) ); // Stores error messages for fields. $error_messages = array(); // Filter user input. foreach ($filters as $filter) { // If that field is required: check that it is not empty. // If it is empty record an error message and continue with the next filter. if ($filter['required'] && empty($filter['var'])) {$error_messages[$filter['field']] =$filter['empty_msg']; continue; } // Apply the filter using filter_var and save the result in status. $status = filter_var($filter['var'], $filter['filter'], array( 'options' =>$filter['filter_options'] )); // If the filter result is false: record an error message. 
if (!$status) {$error_messages[$filter['field']] =$filter['error_msg']; } } if(empty($error_messages)) { return array( 'state' => true, 'usertype_text' =>$usertypes[$usertype], ); } else { return$error_messages; } } ### DB-Errors Your current code catches DB-Errors in your validation function. While you are correct that it is important to handle errors, this is not the right place to do it. A DB-Error will make your application useless therefore you want it to reach the main try catch block of your application, in which you should handle the error. ### OOP You use a global variable for storing your DB connection, better use a singleton object. As a general rule avoid globals. Using a singleton with lazy initializing will reduce load time on pages that do not interact with the DB and will create cleaner code. Use MVC to separate your output and input logic. Send your form to the controller and let it process the users input. Your view object will get the data to display from the model and then include the template file which accesses the variables from the view. Use a user object for your user which you initialize with an array and that you can revert back to an array to store it in the DB. Pass objects or arrays instead of extreme long parameter lists, like you did in validate_registration. ### Tools Writing the server backend from scratch might be good for learning, but in production it makes sense to use a framework like Laravel, Zend Framework, PHP Cake, CodeIgniter, .... They include basic functionality like routing, MVC, form validation, User registration, etc. Or even a CMS like Drupal, Wordpress, .... # Style ### Documentation Document your functions and classes! Use a pattern for documentation like Doxygen (The pattern I used in the posted code above). Be sure to document the type of your input and return variables, because PHP is a typeless language. Break long lines into multiple lines! Use a limit of 80-120 characters per line. • Under "structure" you said "use a loop and filter_var like this:" but I didn't see either of them being used in the example. Care to explain what you meant there? Also, how do I pass the errors to the main try catch block in a smart way? – Chrillewoodz Jan 19 '15 at 13:59 • I loop through the array $filters using foreach and then call filter_var with the params from $filter in the the loop. Create a try catch block around your main program and just don't catch DB Exceptions in your filter function, that way they travel up the call stack until they are caught by your main try and catch block. – Gellweiler Jan 19 '15 at 14:38 • I've cleaned up the loop, hopefully that makes it easier for you to understand what I'm doing. It is OK to learn function oriented programming fist before diving into OOP, so you can ignore the OOP and Tools section for now and return to them when you're ready to take that step. – Gellweiler Jan 19 '15 at 14:49 Usability: Names You might want to check out Falsehoods Programmers Believe About Names. Depending on the country your website will target, relying on first + last name might be fine, but I would definitely not filter the name. For example, you allow öäü, which lets me to believe that you accept users with german names. But still, these people could not sign up. I don't know that much about alphabets in other languages, but looking at this wiki page it seems that you are missing quite a lot of punctuation, not to mention non-latin alphabets. 
Just let users enter their unfiltered names and rely on the standard protections against XSS and SQL injection. And as Gellweiler mentioned, use ENT_QUOTES, so that people named John O' onMouseOver='alert("test"); do not cause problems. Security When echoing variables that could be user supplied, I would always use htmlspecialchars right where you are echoing it (eg echo htmlspecialchars($variable, ENT_QUOTES, 'UTF-8');. Do not rely on filtering that might be done at other places. Misc • comments: your current comments do not really add anything to the code (eg: Tell the server that we're accessing the global$db variable: that should be obvious to anyone who knows PHP). I would like to see documentation on functions (eg return values, and that validate_registration actually performs database access), and possibly the regexes (because complex regexes are often hard to grasp on first look) • functions: you can make your code more readable and reusable if you split it up into functions containing logical blocks of code. For example: validateUserInput, getUserTypes, validateUserType, and userExists. • it doesn't seem to be a good idea to trim the password. What if my password is password? Your code would not have a problem with that, but now I actually don't know my correct passwords. I actually would not trim any of the values, as it is very unlikely that a user would add a space on accident. But definitely don't trim password fields. • mixing of camelCase and under_score is a bit confusing. I would choose one and stick with it. • I would try to choose one identifier for one thing and then stick with it (eg conf-pw vs conf_password vs confirmed_password) to make it easier to recognize and remember. • also note that sometimes you switch how different identifiers in the same context look (eg $validation['conf_password'] (underscore) vs $validation['existing-user'] (dash) vs \$validation['usertype'] (nothing)), which will make it really difficult to remember these. • With naming, isn't it still a good idea to not allow numbers? Because I don't know if there is any country in the world where numbers actually appear in the name.. The only way it could happen I guess would be if someone is called "the 3rd" but that can easily be written as "the third". Good idea or not? – Chrillewoodz Jan 19 '15 at 14:02 • @Chrillewoodz well, why would they want to write the third if their actual name is the 3rd? There are always ways around any restrictions (ß -> ss, ö -> oe, just make up a completely new name, ...), but I don't really see any benefit in limiting what a person can be named. – tim Jan 19 '15 at 14:12
https://rpg.stackexchange.com/questions/69407/can-you-see-the-moon-with-the-spot-skill
# Can you see the moon with the spot skill?

This is born from a... pedantic... argument. The argument is, with the Spot skill, you take a -1 penalty per 10 feet of distance to the object you are trying to spot. Trying to see the moon on a cloudless night with the moon out would have a penalty of -126.1 million. I believe this argument is silly, but as written, is this correct?

• Pathfinder, but related: How far can characters see? Oct 3 '15 at 7:55
• Removing D&D 3.5 from the title removes all context from the post itself, and forces context to be gleaned from the tags. This is not what tags are for. Oct 5 '15 at 15:12
• Meta discussion here. Oct 5 '15 at 18:12

The very first line of the skill description reads (roughly translated, I don't own an English book):

> You use the Spot skill to find characters or creatures that are trying to hide.

So yes, if the moon was a living being actively trying to hide, you would get a huge negative modifier. As the moon is not actively hiding, because it's just a mass of rock, you don't have to roll Spot at all.

That said, somebody the size of a moon would probably grant a huge positive modifier, even if it were actively hiding. Hide modifiers by size are:

| Fine | Diminutive | Tiny | Small | Medium | Large | Huge | Gargantuan | Colossal |
|------|------------|------|-------|--------|-------|------|------------|----------|
| +16  | +12        | +8   | +4    | +0     | -4    | -8   | -12        | -16      |

The scale ends there, with Colossal defined as 64 ft height or more. Given that the moon has a height of twice its radius of 1737.10 km, and a kilometer is 3280.84 feet, that would mean the moon would be 178,098.34 categories above Colossal (the arithmetic is spelled out after the comments below). So if it actually did come to life and decide to actively hide in the clouds, it would indeed stand a good chance of not being spotted from Earth.

• @nvoigt is there some way to fit this humorous look at the "spot" skill in 3.5 into the answer? Oct 3 '15 at 16:50
• @SuperJedi224 The moon sometimes gains concealment and even total concealment from clouds. (It also occasionally grants the sun and other planets cover.) Further, Elder Evils describes an evil planet (I kid you not) that deliberately enters the campaign planet's orbit "on the dark side of the world, keeping the planet between it and the sun" (23). I think that means, literally, the evil planet is hiding (however, the text doesn't give the evil planet's orbital skulking a game effect, failing to provide its Hide check modifier). Oct 5 '15 at 15:50
• So the moon, if it were hiding, would have a -712,408 modifier to Hide (.34 truncated because everything is truncated in D&D). Unfortunately that doesn't counteract the -126.1 million Spot penalty on the PC to find this hiding moon. I think PCs have Metal Gear Solid guard vision. Oct 5 '15 at 16:15
• @HeyICanChan So let's grant our sentient moon the ability to summon clouds. Oct 5 '15 at 18:54
• "So yes, if the moon was a living being actively trying to hide, you would get a huge negative modifier." It would also need the cover etc. (dependent on what feats it had) to even attempt to hide in the first place. Interestingly it is very good at moving silently... Oct 8 '15 at 10:15
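For reference, the arithmetic behind the figures quoted above and in the comments (roughly 178,098 size categories and the -712,408 Hide modifier) works out as follows, assuming, as the answer and comments do, that the size scale is extended linearly at 64 ft per category and -4 to Hide per category beyond Colossal:

$$2 \times 1737.10\ \text{km} \times 3280.84\ \tfrac{\text{ft}}{\text{km}} \approx 11{,}398{,}294\ \text{ft}, \qquad \frac{11{,}398{,}294\ \text{ft}}{64\ \text{ft}} \approx 178{,}098.3\ \text{categories},$$

$$\text{Hide modifier} \approx -16 - 4 \times 178{,}098 = -712{,}408.$$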
http://issac-symposium.org/2012/papers.html
List of accepted papers Click on the plus signs to expand the abstracts. 1. [+] Mingbo Zhang and Yong Luo. Factorization of Differential Operators with Ordinary Differential Polynomial Coefficients 2. Abstract: In this paper, we present an algorithm to factor a differential operator $L=\sigma^n+c_{n-1}\sigma^{n-1}+\cdots+c_1\sigma+c_0$ with coefficients $c_i$ in $\K\{y\}$, where $\K$ is a constant field and $\K\{y\}$ is the ordinary differential polynomial ring over $\K$. Also, we discuss the applications of the algorithm in decomposing nonlinear differential polynomials and factoring differential operators with coefficients in the extension field of $\K$. 3. [+] Martin Albrecht. The M4RIE library for dense linear algebra over small fields with even characteristic 4. Abstract: We describe algorithms and implementations for linear algebra with dense matrices over GF(2^e) for 2 <= e <= 10. Our main contributions are: (a) the notion of Newton-John tables to avoid scalar multiplications in Gaussian elimination and matrix multiplication, (b) an efficient implementation of Karatsuba-style multiplication for matrices over extension fields of GF(2) and (c) a description of an open-source library - called M4RIE - providing the fastest known implementation of dense linear algebra over GF(2^e) for 2 <= e <= 10. 5. [+] Shaoshi Chen, Manuel Kauers and Michael Singer. Telescopers for Rational and Algebraic Functions via Residues 6. Abstract: We show that the problem of constructing telescopers for rational functions of m variables is equivalent to the problem of constructing telescopers for algebraic functions of m - 1 variables and present a new algorithm to construct telescopers for algebraic functions of two variables. These considerations are based on analyzing the residues of the input. According to experiments, the resulting algorithm for rational functions of three variables is faster than known algorithms, at least in some examples of combinatorial interest. The algorithm for algebraic functions implies a new bound on the order of the telescopers. 7. [+] Peter Scheiblechner. Effective de Rham Cohomology - The Hypersurface Case 8. Abstract: We prove an effective bound for the degrees of generators of the algebraic de Rham cohomology of smooth affine hypersurfaces. In particular, we show that the de Rham cohomology of a smooth hypersurface of degree d in C^n can be generated by differential forms of degree O(n 2^n d^{n^2}). This result is relevant for the algorithmic computation of the cohomology, but is also motivated by questions in the theory of ordinary differential equations related to the infinitesimal Hilbert 16th problem. 9. [+] Wei Zhou, George Labahn and Arne Storjohann. Computing Minimal Nullspace Bases 10. Abstract: In this paper we present a deterministic algorithm for the computation of a minimal nullspace basis of an $m\times n$ input matrix of univariate polynomials over a field $\mathbb{K}$ with $m\le n$. This algorithm computes a minimal nullspace basis of a degree $d$ input matrix with a cost of $O^{\sim}\left(n^{\omega}\left\lceil md/n\right\rceil \right)$ field operations in $\mathbb{K}$. The same algorithm also works in the more general situation on computing a shifted minimal nullspace basis, with a given degree shift $\vec{s}\in\mathbb{Z}^{n}$ whose entries bound the corresponding column degrees of the input matrix. 
If $\rho$ is the sum of the $m$ largest entries of $\vec{s}$, then a $\vec{s}$-minimal right nullspace basis can be computed with a cost of $O^{\sim}(n^{\omega}\rho/m)$ field operations. 11. [+] Xiaodong Ma, Yao Sun, Dingkang Wang and Yang Zhang. A Signature-Based Algorithm for Computing Gröbner Bases in Solvable Polynomial Algebras 12. Abstract: Signature-based algorithms, including F5, F5C, G2V and GVW, are efficient algorithms for computing Gröbner bases in polynomial rings. A signature-based algorithm is presented in current paper to compute Gröbner bases in solvable polynomial rings, which include usual commutative polynomial rings and some non-commutative polynomial rings like Weyl algebra. The generalized rewritten criterion (proposed in Sun and Wang 2011) is used to construct this new algorithm. When this new algorithm uses the order implied by GVW, its termination is proved without special assumptions on the computing order of critical pairs. Data structures similar to F5 can be used to speed up this new algorithm, and Gröbner bases for corresponding syzygy modules can be obtained from the outputs in polynomial time. Experimental data shows that most redundant computations can be avoided in this new algorithm. 13. [+] Ana Romero and Francis Sergeraert. Programming before Theorizing, a case study 14. Abstract: This paper relates how a "simple" result in combinatorial homotopy finally led to a totally new understanding of basic theorems in Algebraic Topology, namely the Eilenberg-Zilber theorem, the twisted Eilenberg-Zilber theorem, and finally the Eilenberg-MacLane correspondance between the Classifying Space and Bar constructions. In the last case, it was an amazing lucky consequence of computations based on conjectures not yet proved. The key new tool used in this context is Robin Forman's Discrete Vector Fields theory. 15. [+] Evelyne Hubert and George Labahn. Rational invariants of scalings from Hermite normal forms 16. Abstract: Scalings form a class of group actions on affine spaces that have both theoretical and practical importance. A scaling is accurately described by an integer matrix. Tools from integer linear algebra are exploited to compute a minimal generating set of rational invariants, trivial rewriting and rational sections for such a group action. The primary tools used are Hermite normal forms and their unimodular multipliers. With the same line of ideas, a complete solution to the scaling symmetry reduction of a polynomial system is also presented. 17. [+] Vikram Sharma and Chee Yap. Near Optimal Tree Size Bounds on a Simple Real Root Isolation Algorithm 18. Abstract: The problem of isolating all real roots >of a square-free integer polynomial $f(X)$ inside any given interval $I_0$ is a fundamental problem. EVAL is a simple and practical exact numerical algorithm for this problem: it recursively bisects $I_0$, and any subinterval $I\ib I_0$, until a certain numerical predicate $C_0(I)\lor C_1(I)$ holds on each $I$. We prove that the size of the recursive bisection tree is $$O(d(L+r+\log d))$$ where $f$ has degree $d$, its coefficients have absolute values $<2^L$, and $I_0$ contains $r$ roots of $f$. In the range $L\ge d$, our bound is the sharpest known, and provably optimal. Our results are closely paralleled by recent bounds on EVAL by Sagraloff-Yap (ISSAC 2011) and Burr-Krahmer (2012). In the range $L\le d$, our bound is incomparable with those of Sagraloff-Yap or Burr-Krahmer. 
Similar to the Burr-Krahmer proof, we exploit the technique of continuous amortization'' from Burr-Krahmer-Yap (2009), namely to bound the tree size by an integral $\int_{I_0} G(x)dx$ over a suitable charging function'' $G(x)$. The introduction of the output-size parameter $r$ seems new. We give an application of this feature to the problem of ray-shooting (i.e., finding smallest root in a given interval). 19. [+] Adam Strzeboński. Solving Polynomial Systems over Semialgebraic Sets Represented by Cylindrical Algebraic Formulas 20. Abstract: Cylindrical algebraic formulas are an explicit representation of semialgebraic sets as finite unions of cylindrically arranged disjoint cells bounded by graphs of algebraic functions. We present a version of the Cylindrical Algebraic Decomposition (CAD) algorithm customized for solving systems of polynomial equations and inequalities over semialgebraic sets given in this representation. The algorithm can also be used to solve conjunctions of polynomial conditions in an incremental manner. We show application examples and give an empirical comparison of incremental and direct CAD computation. 21. [+] Andre Galligo and Maria Emilia Alonso. A Root Isolation Algorithm for Sparse Univariate Polynomials 22. Abstract: We consider a univariate polynomial $f$ with real coefficients having a high degree $N$ but a rather small number $d+1$ of monomials, with $d<<N$. Such a sparse polynomial has a number of real root smaller or equal to $d$. Our target is to find for each real root of $f$ an interval isolating this root from the others. The usual subdivision methods relying either on Sturm sequences or Moebius transform followed by Descarte's rule of sign destruct the sparse structure. Our approach relies on the generalized Budan-Fourier of Coste, Lajous, Lombardi, Roy [CLLR:2005] and the techniques developed in Galligo [Gal:2011]. To such a $f$ is asociated a set of $n$ differentiation operators called $\f$-derivations. The Budan-Fourier function $V_f(x)$ counts the sign changes in the sequence of $\f$-derivatives of $f$ evaluated at $x$. The values at which this function jumps are called the $\f$-virtual roots of $f$, these include the real roots of $f$. We also consider the augmented $\f$-virtual roots of $f$ and introduce a genericity property which eases our study and its presentation. We present a fast root isolation method and an algorithm which has been implemented in Maple. We rely on an improved generalized Budan-Fourier count applied to both the input polynomial and its reciprocal, together with Newton-Halley approximation steps. 23. [+] Michael Sagraloff. When Newton meets Descartes: A Simple and Fast Algorithm to Isolate the Real Roots of a Polynomial 24. Abstract: We introduce a novel algorithm denoted NEWDSC to isolate the real roots of a univariate square-free polynomial f with integer coefficients. The algorithm iteratively subdivides an initial interval which is known to contain all real roots of f and performs exact operations on the coefficients of f in each step. For the subdivision strategy, we combine Descartes' Rule of Signs and Newton iteration. More precisely, instead of using a fixed subdivision strategy such as bisection in each iteration, a Newton step based on the number of sign variations for an actual interval is considered, and, only if the Newton step fails, we fall back to bisection. Following this approach, our analysis shows that, for most iterations, quadratic convergence towards the real roots is achieved. 
In terms of complexity, our method induces a recursion tree of almost optimal size O(n\log(n\tau)), where n denotes the degree of the polynomial and \tau the bitsize of its coefficients. The latter bound constitutes an improvement by a factor of \tau upon all existing subdivision methods for the task of isolating the real roots. In addition, we provide a bit complexity analysis showing that NEWDSC needs only \tilde{O}(n^3\tau) bit operations to isolate all real roots of f. This matches the best bound known for this fundamental problem. However, in comparison to the significantly more involved numerical algorithms by V. Pan and A. Schönhage which achieve the same bit complexity for the task of isolating all complex roots, NEWDSC focuses on real root isolation, is much easier to access and to implement. 25. [+] Pavel Emeliyanenko and Michael Sagraloff. On the Complexity of Solving a Bivariate Polynomial System 26. Abstract: We study the complexity of computing the real solutions of a bivariate polynomial system using the recently presented algorithm BISOLVE~\cite{bes-bisolve-2011}. BISOLVE is a classical elimination method which first projects the solutions of a system onto the x- and y-axes and, then, selects the actual solutions from the so induced candidate set. However, unlike similar algorithms, BISOLVE requires no genericity assumption on the input nor it needs any change of the coordinate system. Furthermore, extensive benchmarks from~\cite{bes-bisolve-2011} confirm that the algorithm outperforms state of the art approaches by a large factor. In this paper, we show that, for two polynomials f,g\in\mathbb{ZZ}[x,y] of total degree at most $n$ with integer coefficients bounded in absolute value by 2^\tau, BISOLVE computes isolating boxes for all real solutions of the system f=g=0 using \Otilde(n^8+n^7\tau) bit operations, thereby improving the previous record bound by four magnitudes. 27. [+] Akos Seress. 2-closed Majorana representations 28. Abstract: The sporadic simple group Monster acts on the Conway-Griess-Norton (CGN) algebra, which is a real algebra $V_M$ of dimension 196,884, equipped with a positive definite scalar product and a bilinear, commutative, and non-associative algebra product. Certain properties of idempotents in $V_M$, that correspond to 2A involutions in the Monster, have been axiomatized by Ivanov as the Majorana representation of the Monster. The axiomatization enables us to talk about Majorana representations of arbitrary groups $G$ that are generated by involutions. In general, a Majorana representation may or may not exist, but if $G$ is isomorphic to a subgroup of the Monster and a representation is isomorphic to the corresponding subalgebra of $V_M$ then we say that the Majorana representation is based on an embedding of $G$ in the Monster. In this paper, we describe a generic theoretical procedure to construct Majorana representations, and a GAP computer program that implements the procedure. It turns out that in many cases the representations are based on embeddings in the Monster, thereby providing a valuable tool of studying subalgebras of the CGN algebra that were unaccessible in the 196,884-dimensional setting. 29. [+] Shaoshi Chen and Manuel Kauers. Order-Degree Curves for Hypergeometric Creative Telescoping 30. Abstract: Creative telescoping applied to a bivariate proper hypergeometric term produces linear recurrence operators with polynomial coeffcients, called telescopers. 
We provide bounds for the degrees of the polynomials appearing in these operators. Our bounds are expressed as curves in the (r,d)-plane which assign to every order r a bound on the degree d of the telescopers. These curves are hyperbolas, which reflect the phenomenon that higher order telescopers tend to have lower degree, and vice versa. 31. [+] Jean-François Biasse and Claus Fieker. A polynomial time algorithm for computing the HNF of a module over the integers of a number field 32. Abstract: We present a variation of the modular algorithm for computing the pseudo-HNF of an OK-module presented by Cohen, where OK is the ring of integers of a number field K. The modular strategy was conjectured to run in polynomial time by Cohen, but so far, no such proof was available in the literature. In this paper, we provide a new method to prevent the coefficient explosion and we rigorously assess the complexity with respect to the size of the input and the invariants of the field K. 33. [+] Alin Bostan, Frédéric Chyzak, Ziming Li and Bruno Salvy. Fast Computation of Common Left Multiples of Linear Ordinary Differential Operators 34. Abstract: We study tight bounds and fast algorithms for LCLMs of several linear differential operators with polynomial coefficients. We analyse the worst-case arithmetic complexity of existing algorithms for LCLMs, as well as the size of their outputs. We propose a new algorithm that reduces the LCLM computation to a linear algebra problem on a polynomial matrix. The new algorithm yields sharp bounds on the coefficient degrees of the LCLM, improving by two orders of magnitude the previously known bounds. The complexity of the new algorithm is almost optimal, in the sense that it nearly matches the arithmetic size of the output. 35. [+] James McCarron. Small Homogeneous Quandles 36. Abstract: We derive an algorithm for computing all the homogeneous quandles of a given order n provided that a list of the transitive permutation groups of degree n are known. We discuss the implementation of the algorithm, and use it to enumerate the number of isomorphism classes of homogeneous quandles up to order 23 and compute representatives for each class. We also completely determine the homogeneous quandles of prime order. As a by-product, we are able to replicate an earlier calculation of the connected quandles of order at most 30 and, based on this, to compute the number of isomorphism classes of simple quandles to the same order. 37. [+] Mustafa Elsheikh, Mark Giesbrecht, Andy Novocin and B. David Saunders. Fast Computation for Smith Forms of Sparse Matrices Over Local Rings 38. Abstract: We present algorithms to compute the Smith normal form of matrices over two families of local rings. The algorithms use the black box model which is suitable for sparse and structured matrices. The algorithms depend on a number of tools, such as matrix rank computation over finite fields, for which the best-known time- and memory-efficient algorithms are probabilistic. For an n by n matrix A over the ring F[z]/(f^e), where f^e is a power of an irreducible polynomial f in F[z] of degree d, our algorithm requires O(μ*de^2n) operations in F, where our black box is assumed to require O(μ) operations in F to compute a matrix-vector product by a vector over F[z]/(f^e) (and μ is assumed greater than den. The algorithm only requires additional storage for O(den) elements of F. 
In particular, if μ=O(den), then our algorithm requires only O~(n^2d^2e^3) operations in F, which is an improvement on previous methods for small d and e. For the ring Z/p^eZ, where p is a prime, we give an algorithm which is time- and memory-efficient when the number of nontrivial invariant factors is small. We describe a method for dimension reduction while preserving the invariant factors. The runtime is essentially linear in neμ log(p), where μ is the cost of black-box evaluation (assumed greater than n). To avoid the practical cost of conditioning, we give a Monte Carlo certificate, which at low cost, provides either a high probability of success or a proof of failure. The quest for a time and memory efficient solution without the restriction on number of nontrivial invariant factors remains open. We offer a conjecture which may contribute toward that end. 39. [+] Feng Guo, Erich L. Kaltofen and Lihong Zhi. Certificates of Impossibility of Hilbert-Artin Representations of a Given Degree for Definite Polynomials and Functions 40. Abstract: We deploy numerical semidefinite programming and conversion to exact rational inequalities to certify that for a positive semidefinite input polynomial or rational function, any representation as a fraction of sums-of-squares of polynomials with real coefficients must contain polynomials in the denominator of degree no less than a given input lower bound. By Artin’s solution to Hilbert’s 17th problems, such representations always exist for some denominator degree. Our certificates of infeasibility are based on the generalization of Farkas’ Lemma to semidefinite programming. The literature has many famous examples of impossibility of SOS representability including Motzkin’s, Robinson’s, Choi’s and Lam’s polynomials, and Reznick’s lower degree bounds on uniform denominators, e.g., powers of the sum-of-squares of each variable. Our work on exact certificates for positive semidefiniteness allows for nonuniform denominators, which can have lower degree and are often easier to convert to exact identities. Here we demonstrate our algorithm by computing certificates of impossibilities for an arbitrary sum-of-squares denominator of degree 2 and 4 for some symmetric sextics in 4 and 5 variables, respectively. We can also certify impossibility of base polynomials in the denominator of restricted term structure, for instance as in Landau’s reduction by one less variable. 41. [+] Francesco Biscani. Parallel sparse polynomial multiplication on modern hardware architectures 42. Abstract: We present a high performance algorithm for the parallel multiplication of sparse multivariate polynomials on modern computer architectures. The algorithm is built on three main concepts: a cache-friendly hash table implementation for the storage of polynomial terms in distributed form, a statistical method for the estimation of the size of the multiplication result, and the use of Kronecker substitution as a homomorphic hash function. The algorithm achieves high performance by promoting data access patterns that favour temporal and spatial locality of reference. We present benchmarks comparing our algorithm to routines of other computer algebra systems, both in sequential and parallel mode. 43. [+] Jérémy Berthomieu and Romain Lebreton. Relaxed p-adic Hensel lifting for algebraic systems 44. Abstract: In a previous article, an implementation of lazy p-adic integers with a multiplication of quasi-linear complexity, the so-called relaxed product, was presented. 
Given a ring R and an element p in R, we design a relaxed Hensel lifting for algebraic systems from R/(p) to the p-adic completion R_p of R. Thus, any root of linear and algebraic regular systems can be lifted with a quasi-optimal complexity. We report our implementations in C++ within the computer algebra system Mathemagix and compare them with Newton operator. As an application, we solve linear systems over the integers and compare the running times with Linbox and IML. 45. [+] Sergei Abramov and Denis Khmelnov. On valuations of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients 46. Abstract: Algorithms for computing lower bounds on valuations (e.g., orders of the poles) of the components of meromorphic solutions of arbitrary-order linear difference systems with polynomial coefficients are considered. In addition to algorithms based on ideas which have been already utilized in computer algebra for treating normal first-order systems, a new algorithm using "tropical" calculations is proposed. It is shown that the latter algorithm is rather fast, and produces the bounds with good accuracy. 47. [+] Adam Strzeboński and Elias Tsigaridas. Univariate real root isolation in multiple extension fields 48. Abstract: We present algorithmic, complexity and implementation results for the problem of isolating the real roots of a univariate polynomial in $\Ba \in L[y]$, where $L=\QQ(\alpha_1, \dots, \alpha_{\ell})$ is an algebraic extension of the rational numbers. Our bounds are single exponential in $\ell$ and match the ones presented in \cite{st-issac-2011} for the case $\ell=1$. We consider two approaches. The first, indirect approach, using multivariate resultants, computes a univariate polynomial with integer coefficients, among the real roots of which are the real roots of $\Ba$. The Boolean complexity of this approach is $\sOB(N^{4\ell+4})$, where $N$ is the maximum of the degrees and the coefficient bitsize of the involved polynomials. The second, direct approach, tries to solve the polynomial directly, without reducing the problem to a univariate one. We present an algorithm that generalizes Sturm algorithm from the univariate case, and modified versions of well known solvers that are either numerical or based on Descartes' rule of sign. We achieve a Boolean complexity of $\sOB(\min\set{N^{4\ell + 7},N^{2\ell^2+6}})$ and $\sOB( \max\set{N^{\ell+5}, N^{2\ell+3}})$, respectively. We implemented the algorithms in \func{C} as part of the core library of MATHEMATICA and we illustrate their efficiency over various data sets. 49. [+] Toshinori Oaku. An algorithm to compute the differential equations for the logarithm of a polynomial 50. Abstract: We present an algorithm to compute the annihilator of (i.e., the linear differential equations for) the multi-valued analytic function $f^\lambda(\log f)^m$ in the ring $D_n$ of differential operators for a given non-constant polynomial $f$, a non-negative integer $m$, and a complex number $\lambda$. This algorithm consists in the differentiation with respect to $s$ of the annihilator of $f^s$ in the ring $D_n[s]$ and ideal quotient computation in $D_n$. The obtained differential equations constitute what is called a holonomic system in $D$-module theory. Hence combined with the integration algorithm for $D$-modules, this enables us to compute a holonomic system for the integral of a function involving the logarithm of a polynomial with respect to some variables. 51. 
[+] Moulay Barkatou, Thomas Cluzeau, Carole El Bacha and Jacques-Arthur Weil. Computing Closed Form Solutions of  Integrable Connections 52. Abstract: We present algorithms for computing rational and hyperexponential solutions of linear $D$-finite partial differential systems written as integrable connections. We show that these types of solutions can be computed recursively by adapting existing algorithms handling ordinary linear differential systems. We provide an arithmetic complexity analysis of the algorithms that we develop. A Maple implementation is available and some examples and applications are given. 53. [+] Romain Lebreton and Éric Schost. Algorithms for the universal decomposition algebra 54. Abstract: Let k be a field and let f be a polynomial of degree n in k [T]. The universal decomposition algebra A is the quotient of k [X_1, ..., X_n] by the ideal of symmetric relations (those polynomials that vanish on all permutations of the roots of f). We show how to obtain efficient algorithms to compute in A. We use a univariate representation of A, i.e. an isomorphism of the form A = k [T]/Q(T), since in this representation, arithmetic operations in A are known to be quasi-optimal. We give details for two related algorithms, to find the isomorphism above, and to compute the characteristic polynomial of any element of A. 55. [+] Alin Bostan, Muhammad F. I. Chowdhury, Romain Lebreton, Bruno Salvy and Éric Schost. Power Series Solutions of Singular (q)−Differential Equations 56. Abstract: We provide algorithms computing power series solutions of a large class of differential or q-differential equations. Their number of arithmetic operations grows linearly with the precision, up to logarithmic terms. 57. [+] Paolo Lella. An efficient implementation of the algorithm computing the Borel-fixed points of a Hilbert scheme 58. Abstract: Borel-fixed ideals play a key role in the study of Hilbert schemes. Indeed each component and each intersection of components of a Hilbert scheme contains at least one Borel-fixed point, i.e. a point corresponding to a subscheme defined by a Borel-fixed ideal. Moreover Borel-fixed ideals have good combinatorial properties, which make them very interesting in an algorithmic perspective. In this paper, we propose an implementation of the algorithm computing all the saturated Borel-fixed ideals with number of variables and Hilbert polynomial assigned, introduced from a theoretical point of view in the paper "Segment ideals and Hilbert schemes of points", Discrete Mathematics 311 (2011). 59. [+] Stavros Garoufalidis and Christoph Koutschan. Twisting q-holonomic sequences by complex roots of unity 60. Abstract: A sequence $f_n(q)$ is $q$-holonomic if it satisfies a nontrivial linear recurrence with coefficients polynomials in $q$ and $q^n$. Our main theorem states that $q$-holonomicity is preserved under twisting, i.e., replacing $q$ by $\omega q$ where $\omega$ is a complex root of unity. Our proof is constructive, works in the multivariate setting of $\partial$-finite sequences and is implemented in the Mathematica package HolonomicFunctions. Our results are illustrated by twisting natural $q$-holonomic sequences which appear in quantum topology, namely the colored Jones polynomial of pretzel knots and twist knots. The recurrence of the twisted colored Jones polynomial can be used to compute the asymptotics of the Kashaev invariant of a knot at an arbitrary complex root of unity. 61. [+] Jules Svartz and Jean-Charles Faugère. 
Solving Polynomial Systems Globally Invariant Under an Action of the Symmetric Group and Application to the Equilibria of N vortices in the Plane. 62. Abstract: \begin{abstract} We propose an efficient algorithm to solve polynomial systems of which equations are \emph{globally} invariant under an action of the symmetric group $$\mathfrak{S}_N$$ where it acts on the variable $$x_{i}$$ where the number of variables is a multiple of $$N$$. For instance, we can assume that swapping two variables (or two pairs of variables) in one equation give rise to another equation of the system (perhaps changing the sign). The idea is to apply many times divided difference operators to the original system in order to derive a new system of equations involving only the symmetric functions of a subset of the variables. The next step is to solve the system using Gröbner techniques; this is usually several order faster than computing the Gröbner basis of the original system since the number of solutions of the corresponding ideal has been divided by at least $$N!$$. To illustrate the algorithm and to demonstrate its efficiency, we apply the method to a well known physical problem called equilibria positions of vortices. This problem has been studied for almost 150 years and goes back to work by Lord Kelvin. Assuming that all vortices have same vorticity, the problem can be reformulated as a system polynomial equations invariant under an action of $\mathfrak{S}_N$. Using numerical methods, physicists have been able to compute solutions up to $N\leq 7$ but it was an open challenge to check whether the set of solution is complete. Direct naive approach of Gröbner bases techniques give rise to hard-to-solve polynomial system: for instance, when $$N=5$$, it take several hours to compute the Gröbner basis and the number of solutions is $$2060$$. By contrast, applying the new algorithm to the same problem give rise to a system of $$17$$ solutions that can be solved in less than $$0.1$$ sec. Moreover, we are able to compute \emph{all} equilibria when $$N\leq8$$ (the case $$N=8$$ being completely new). \end{abstract} 63. [+] Bjarke Hammersholt Roune and Michael Stillman. Practical Groebner Basis Computation 64. Abstract: We report on our experiences exploring state of the art Groebner basis computation. We investigate signature based algorithms in detail. We also introduce new practical data structures and computational techniques for use in both signature based Groebner basis algorithms and more traditional variations of the classic Buchberger algorithm. Our conclusions are based on experiments using our new freely available open source standalone C++ library. 65. [+] Olivier Bournez, Daniel Graça and Amaury Pouly. On the complexity of solving polynomial initial value problems 66. Abstract: In this paper we prove that computing the solution to a initial-value problem of the form $\dot{y}=p(y)$ with initial condition $y(t_0)=y_0\in\R^d$ at time $t_0+T$ with precision $e^{-\mu}$ where $p$ is a vector of polynomial can be done in time polynomial in the value of $T$, $\mu$ and $Y=\sup_{t_0\leqslant u\leqslant T}\infnorm{y(u)}$. Contrary to existing results, our algorithm works for any vector of polynomial $p$ over any bounded or unbounded domain and has a guaranteed complexity and precision. In particular we do not assume $p$ to be fixed, or the solution to lie in a compact domain, nor we assume that $p$ has a Lipschitz constant. 67. [+] Joris van der Hoeven and Gregoire Lecerf. 
On the complexity of multivariate blockwise polynomial multiplication 68. Abstract: In this article, we study the problem of multiplying two multivariate polynomials which are somewhat but not too sparse, typically like polynomials with convex supports. We design and analyze an algorithm which is based on blockwise decomposition of the input polynomials, and which performs the actual multiplication in an FFT model or some other more general so called "evaluated model". If the input polynomials have total degrees at most d, then, under mild assumptions on the coefficient ring, we show that their product can be computed with O(s^1.5337) ring operations, where s denotes the number of all the monomials of total degree at most 2d. 69. [+] Jean-Charles Faugère, Mohab Safey El Din and Pierre-Jean Spaenlehauer. Critical Points and Grobner Bases: the Unmixed Case 70. Abstract: We consider the problem of computing critical points of the restriction of a polynomial map to an algebraic variety. This is of first importance since the global minimum of such a map is reached at a critical point. Thus, these points appear naturally in non-convex polynomial optimization which occurs in a wide range of scientific applications (control theory, chemistry, economics,...). Critical points also play a central role in recent algorithms of effective real algebraic geometry. Experimentally, it has been observed that Gröbner basis algorithms are efficient to compute such points. Therefore, recent software based on the so-called Critical Point Method are built on Gröbner bases engines. Let $f_1, \ldots, f_p$ be polynomials in $\Q[x_1, \ldots, x_n]$ of degree $D$, $V\subset\C^n$ be their complex variety and $\pi_1$ be the projection map $(x_1,\ldots, x_n)\mapsto x_1$. The critical points of the restriction of $\pi_1$ to $V$ are defined by the vanishing of $f_1, \ldots, f_p$ and some maximal minors of the Jacobian matrix associated to $f_1, \ldots, f_p$. Such a system is algebraically structured: the ideal it generates is the sum of a determinantal ideal and the ideal generated by $f_1,\ldots, f_p$. We provide the first complexity estimates on the computation of Gröbner bases of such systems defining critical points. We prove that under genericity assumptions on $f_1,\ldots, f_p$, the complexity is polynomial in the generic number of critical points, i.e. $D^p(D-1)^{n-p}{{n-1}\choose{p-1}}$. More particularly, in the quadratic case $D=2$, the complexity of such a Gröbner basis computation is polynomial in the number of variables $n$ and exponential in $p$. We also give experimental evidence supporting these theoretical results. 71. [+] Philippe Trebuchet and Bernard Mourrain. Border basis representation of general quotient algebra 72. Abstract: In this paper, we generalized the construction of border bases to non-zero dimensional ideals for normal forms compatible with the degree, tackling the remaining obstacle for a general application of border basis methods. First, we give conditions to have a border basis up to a given degree. Next, we describe a new stopping criteria to determined when the reduction with respect to the leading terms is a normal form. This test based on the persistence and regularity theorems of Gotzmann yields a new algorithm for computing a border basis of any ideal, which proceeds incrementally degree by degree until its regularity. We detail it, prove its correctness, present its implementation and report some experimentations which illustrates its practical good behavior. 73. 
[+] Danko Adrovic and Jan Verschelde. Computing Puiseux Series for Algebraic Surfaces 74. Abstract: In this paper we outline an algorithmic approach to compute Puiseux series expansions for algebraic surfaces. The series expansions originate at the intersection of the surface with as many coordinate planes as the dimension of the surface. Our approach starts with a polyhedral method to compute cones of normal vectors to the Newton polytopes of the given polynomial system that defines the surface. If as many vectors in the cone as the dimension of the surface define an initial form system that has isolated solutions, then those vectors are potential tropisms for the initial term of the Puiseux series expansion. Our preliminary methods produce exact representations for solution sets of the cyclic $n$-roots problem, for $n = m^2$, corresponding to a result of Backelin. 75. [+] Colton Pauderis and Arne Storjohann. Deterministic unimodularity certification 76. Abstract: The asymptotically fastest algorithms for many linear algebra problems on integer matrices, including solving a system of linear equations and computing the determinant, use high-order lifting. Currently, high-order lifting requires the use of a randomized shifted number system to detect and avoid error-producing carries. By interleaving quadratic and linear lifting, we devise a new algorithm for high-order lifting that allows us to work in the usual symmetric range modulo $p$, thus avoiding randomization. As an application, we give a deterministic algorithm to assay if an $n \times n$ integer matrix $A$ is unimodular. The cost of the algorithm is $O((\log n) n^{\omega}\, \M(\log n + \log ||A||))$ bit operations, where $||A||$ denotes the largest entry in absolute value, and $\M(t)$ is the cost multiplying two integers bounded in bit length by $t$. 77. [+] Moulay A. Barkatou and Clemens G. Raab. Solving Linear Ordinary Differential Systems in Hyperexponential Extensions 78. Abstract: Let F be a differential field generated from the rational functions over some constant field by one hyperexponential extension. We present an algorithm to compute the solutions in F^n of systems of n first order linear ODEs. Solutions in F of a scalar ODE of higher order can be determined by an algorithm of Bronstein and Fredet. Our approach avoids reduction to the scalar case. We also give examples how this can be applied to integration. 79. [+] Ambros Gleixner, Dan Steffy and Kati Wolter. Improving the Accuracy of Linear Programming Solvers with Iterative Refinement 80. Abstract: We describe an iterative refinement procedure for computing extended precision or exact solutions to linear programming problems (LPs). Arbitrarily precise solutions can be computed by solving a sequence of closely related LPs with limited precision arithmetic. The LPs solved at iterations of this algorithm share the same constraint matrix as the original problem instance and are transformed only by modification of the objective function, right-hand side, and variable bounds. Exact computation is used to compute and store the exact representation of the transformed problems, while numeric computation is used for computing approximate LP solutions and applying iterations of the simplex algorithm. At all steps of the algorithm the LP bases encountered in the transformed problems correspond directly to LP bases in the original problem description. 
We demonstrate that this algorithm is effective in practice for computing extended precision solutions and that this leads to direct improvement of the best known methods for solving LPs exactly over the rational numbers. A proof-of-concept implementation is done within the SoPlex LP solver. 81. [+] Luk Bettale, Jean-Charles Faugère and Ludovic Perret. Solving Polynomial Systems over Finite Fields: Improved Analysis of the Hybrid Approach 82. Abstract: The Polynomial System Solving (PoSSo) problem is a fundamental NP-Hard problem in computer algebra. Among many others, PoSSo have applications in area such as coding theory and cryptology. Typically, the security of cryptographic multivariate public-key schemes (MPKC) such as the UOV cryptosystem of Kipnis, Shamir and Patarin is directly related to the hardness of PoSSo over finite fields. The goal of this paper is to further understand the influence of finite fields on the hardness of PoSSo. To this end, we consider the so-called {\it hybrid approach}. This is a polynomial system solving method dedicated to finite fields proposed by Bettale, Faug\`ere and Perret (Journal of Mathematical Cryptography, 2009). The idea is to combine exhaustive search with Gröbner bases. The efficiency of the hybrid approach is related to the choice of a trade-off between the two methods. We propose here an improved complexity analysis dedicated to quadratic systems. Whilst the principle of the hybrid approach is simple, its careful analysis leads to rather surprising and somehow unexpected results. We first prove that the best trade-off (i.e. number of variables to be fixed) allowing to minimize the complexity is achieved by fixing a number of variables proportional to the number of variables $n$ of the system considered. Under some natural algebraic assumption, we then show that the asymptotic complexity of the hybrid approach is $2^{(3.31-3.62\,\log_2\left(q\right)^{-1})\, n}$, where $q$ is the size of the field (under the condition in particular that $\log(q)\ll n$). This is to date, the best complexity for solving PoSSo over finite fields (when $q>2$). Indeed, we have been able to quantify the gain provided by the hybrid approach compared to a direct Gröbner basis method. For quadratic systems, we show (assuming a natural algebraic assumption) that this gain is exponential in the number of variables. Asymptotically, the gain is $2^{1.49\,n}$ when both $n$ and $q$ grow to infinity and $\log(q)\ll n$. 83. [+] Matthew Comer, Erich Kaltofen and Clément Pernet. Sparse Polynomial Interpolation and Berlekamp/Massey Algorithms That Correct Outlier Errors in Input Values 84. Abstract: We propose algorithms performing sparse interpolation with errors, based on Prony's / Ben-Or's & Tiwari's algorithm, using a Berlekamp/Massey algorithm with early termination. First, we give a randomized algorithm that can determine a t -sparse polynomial f, where f has exactly t non-zero terms, from a bound T >= t and a sequence of N= (2T+1)(e+1) evaluations f(p^i), where i=1,2,3,...,N and p a field element, in the presence of <= e wrong evaluations in the sequence, that are spoiled either with random or misleading errors. We also investigate the problem of recovering the minimal linear generator from a sequence of field elements that are linearly generated but where again <= e elements are erroneous. 
We show that there exist sequences of < 2t(2e+1) elements, such that two distinct generators of length t satisfy the linear recurrence up to <= e faults, at least if the field has a characteristic unequal 2. Uniqueness can be proven (for any field characteristic) for length >= 2t(2e+1) of the sequence with <= e errors. Finally, we present the Majority Rule Berlekamp/Massey algorithm, which can recover the unique minimal linear generator of degree t when given bounds T >= t and E >= e and the initial sequence segment of 2T(2E+1) elements. The latter yields a unique sparse interpolant for the first problem. This research is motivated by the sparse interpolation algorithms with numeric noise, into which we now can bring outlier errors in the values. 85. [+] Masao Ishikawa and Christoph Koutschan. Zeilberger's Holonomic Ansatz for Pfaffians 86. Abstract: A variation of Zeilberger's holonomic ansatz for symbolic determinant evaluations is proposed which is tailored to deal with Pfaffians. The method is also applicable to determinants of skew-symmetric matrices, for which the original approach does not work. As Zeilberger's approach is based on the Laplace expansion (cofactor expansion) of the determinant, we derive our approach from the cofactor expansion of the Pfaffian. To demonstrate the power of our method, we prove, using computer algebra algorithms, some conjectures proposed in the paper "Pfaffian decomposition and a Pfaffian analogue of $q$-Catalan Hankel determinants" by Ishikawa, Tagawa, and Zeng. A minor summation formula related to partitions and Motzkin paths follows as a corollary. 87. [+] Raoul Blankertz, Joachim von Zur Gathen and Konstantin Ziegler. Compositions and collisions at degree $p^2$ 88. Abstract: A univariate polynomial $f$ over a field is decomposable if $f= g \circ h= g(h)$ for nonlinear polynomials $g$ and $h$. In order to count the decomposables, one has to know the number of equal-degree collisions, that is $f = g \circ h = g^* \circ h^*$ with $(g,h) \neq (g^{*}, h^{*})$ and $\deg g = \deg g^*$. Such collisions only occur in the wild case, where the field characteristic $p$ divides $\deg f$. Reasonable bounds on the number of decomposables over a finite field are known, but they are less sharp in the wild case, in particular for degree $p^2$. We provide a classification of all polynomials of degree $p^2$ with a collision. This yields the exact number of decomposable polynomials of degree $p^{2}$ over a finite field of characteristic $p$. We also present an algorithm that determines whether a given polynomial of degree $p^{2}$ has a collision or not. 89. [+] Yue Ma and Lihong Zhi. Computing Real Solutions of Polynomial Systems via Low-Rank Moment Matrix Completion 90. Abstract: In this paper, we propose a new algorithm for computing real roots of polynomial equations or a subset of real roots in a given semi-algebraic set described by additional polynomial inequalities. The algorithm is based on using modified fixed point continuation method for solving Lasserre's hierarchy of moment relaxations. We establish convergence properties for our algorithm. For a large-scale polynomial system with only few real solutions in a given area, we can extract them quickly. Moreover, for a polynomial system with an infinite number of real solutions, our algorithm can also be used to find some isolated real solutions or real solutions on the manifolds. 91. [+] Vlad Slavici, Daniel Kunkle, Gene Cooperman and Stephen Linton. 
An Efficient Programming Model for Memory-Intensive Recursive Algorithms using Parallel Disks 92. Abstract: In order to keep up with the demand for solutions to problems with ever-increasing data sets, both academia and industry have embraced commodity computer clusters with locally attached disks or SANs as an inexpensive alternative to supercomputers. With the advent of tools for parallel disks programming, such as MapReduce, STXXL and Roomy --- that allow the developer to focus on higher-level algorithms --- the programmer productivity for memory-intensive programs has increased many-fold. However, such parallel tools were primarily targeted at iterative programs. We propose a programming model for migrating recursive RAM-based legacy algorithms to parallel disks. Many memory-intensive symbolic algebra algorithms are most easily expressed as recursive algorithms. In this case, the programming challenge is multiplied, since the developer must re-structure such an algorithm with two criteria in mind: converting a naturally recursive algorithm into an iterative algorithm, while simultaneously exposing any potential data parallelism (as needed for parallel disks). This model alleviates the large effort going into the design phase of an external memory algorithm. Research in this area over the past 10 years has focused on per-problem solutions, without providing much insight into the connection between legacy algorithms and out-of-core algorithms. Our method shows how legacy algorithms employing recursion and non-streaming memory access can be more easily translated into efficient parallel disk-based algorithms. We demonstrate the ideas on a largest computation of its kind: the determinization via subset construction and minimization of very large nondeterministic finite set automata (NFA). To our knowledge, this is the largest subset construction reported in the literature. Determinization for large NFA has long been a large computational hurdle in the study of permutation classes defined by token passing networks. The programming model was used to design and implement an efficient NFA determinization algorithm that solves the next stage in analyzing token passing networks representing two stacks in series.
https://deepai.org/publication/constant-factor-approximation-for-tracking-paths-and-fault-tolerant-feedback-vertex-set
# Constant Factor Approximation for Tracking Paths and Fault Tolerant Feedback Vertex Set

Consider a vertex-weighted graph G with a source s and a target t. Tracking Paths requires finding a minimum weight set of vertices (trackers) such that the sequence of trackers in each path from s to t is unique. In this work, we derive a factor 66-approximation algorithm for Tracking Paths in weighted graphs and a factor 4-approximation algorithm if the input is unweighted. This is the first constant factor approximation for this problem. While doing so, we also study approximation of the closely related r-Fault Tolerant Feedback Vertex Set problem. There, for a fixed integer r and a given vertex-weighted graph G, the task is to find a minimum weight set of vertices intersecting every cycle of G in at least r+1 vertices. We give a factor 𝒪(r^2) approximation algorithm for r-Fault Tolerant Feedback Vertex Set if r is a constant.

## 1 Introduction

In this paper, we study the Tracking Paths problem, which involves finding a minimum weight set of vertices in a vertex-weighted simple graph $G$ that can track moving objects in a network along the way from a source $s$ to a target $t$. A set of vertices $T$ is a tracking set if for any (simple) $s$-$t$ path $P$ in $G$, the sequence of the subset of vertices from $T$ that appear in $P$, in their order along $P$, uniquely identifies the path $P$. That is, any two distinct $s$-$t$ paths $P_1$ and $P_2$ must yield distinct sequences of trackers. Formally, the problem is the following.

Tracking Paths
Input: Undirected graph $G$ with positive integer weights $w \colon V(G) \to \mathbb{Z}^+$, a source $s$, and a target $t$.
Output: A minimum weight tracking set $T \subseteq V(G)$.

In the current age of information, social media networks have an important role in information exchange and dissemination. However, due to the unregulated nature of this exchange, the spreading of rumours and fake news poses serious challenges in terms of authenticity of information [ChierichettiLP11, rumour-spreading].
Identifying and studying patterns of rumor spreading in social media poses a lot of challenges due to the huge amounts of data in constant movement in large networks [rumour-det]. Tracing the sequence of channels (people, agents, …) through which rumors spread can make it easier to contain the spread of such unwanted messages [rumour-bigdata, rumour-pattern]. A basic approach would require tracing the complete route traversed by each message in the network. Here an optimum tracking set can serve as a resource-efficient solution for tracing the spread of rumors and dissolving them. Furthermore, Tracking Paths finds applications in tracking traffic movement in transport networks and tracing object movement in wireless sensor networks [sensor-tracking, localization].

The graph theoretic version of the problem was introduced by Banik et al. [BanikKPS20], wherein the authors studied the unweighted (i.e., w(v) = 1 for all v ∈ V(G)) shortest path variant of the problem, namely Tracking Shortest Paths (i.e., the set T is required to uniquely identify each of the shortest s-t paths). They showed that this problem is NP-hard, even to approximate within a certain factor. They also showed that Tracking Shortest Paths admits a constant factor approximation algorithm for planar graphs. Later, the parameterized complexity of Tracking Paths was studied in [tr-j], where the problem was also proven to be NP-hard. To the best of our knowledge, Eppstein et al. [ep-planar] were the first to study approximation algorithms for the unweighted Tracking Paths; they gave a constant factor approximation algorithm when the input graph is planar. Recently this result was extended by Goodrich et al. [ep-log], who gave approximation algorithms for minor-free graph classes and for general (unweighted) graphs, as well as an approximation algorithm for Tracking Paths. The existence of a constant factor approximation algorithm was posed as an open problem by Eppstein et al. [ep-planar]. In this paper we answer this question affirmatively.

###### Theorem 1.1
There is a 66-approximation algorithm for Tracking Paths in weighted graphs and a 4-approximation algorithm if the input is unweighted.

There exists an interesting connection between Feedback Vertex Set (FVS) and Tracking Paths. Before we discuss this in more detail, we introduce FVS and its fault tolerant variant. Formally, for a given vertex-weighted graph G, FVS requires finding a minimum weight set of vertices S ⊆ V(G), referred to as a feedback vertex set (fvs), such that the graph induced by the vertex set V(G) ∖ S does not have any cycles. FVS is a classical NP-hard problem [Karp72] that has been thoroughly studied in graph theory. An r-fault tolerant feedback vertex set (r-ftfvs) is a set of vertices that intersects each cycle in the graph in at least r+1 vertices; finding a minimum weight r-ftfvs is the r-Fault Tolerant Feedback Vertex Set problem [pm-1fvs]. Note that if a graph has a cycle of length less than or equal to r, then it cannot have an r-ftfvs.

#### Relation Between Fvs and Tracking Paths.

For a graph G with source s and destination t, if each vertex and edge participates in at least one s-t path, then we refer to G as a preprocessed graph. It is known that in a preprocessed graph, a tracking set is also a feedback vertex set [tr-j]. Thus, the weight of a minimum feedback vertex set serves as a lower bound for the weight of a tracking set in preprocessed graphs. This lower bound has proven to be helpful in the analysis of Tracking Paths.
However, approximating Tracking Paths has been challenging since the size of a tracking set can be arbitrarily larger than that of a minimum fvs. Further, it is known [iwoca, ep-planar] that in a graph G, if a set of vertices T contains at least three vertices from each cycle in G, then T is a tracking set for G. Thus, a 2-fault tolerant feedback vertex set is also a tracking set. In this paper, we borrow inspiration from this concept to derive a polynomial time algorithm to compute an approximate tracking set. In particular, we start with finding an fvs for the input graph G, and then identify cycles that need more vertices as trackers in addition to the ones selected as feedback vertices. Observe that a feedback vertex set is indeed a 0-fault tolerant fvs. Misra [pm-1fvs] gave constant factor approximation algorithms for the problem of finding a 1-fault tolerant fvs in unweighted and in weighted graphs. In this paper, we give an approximation algorithm for finding an r-fault tolerant feedback vertex set, where r is a constant. We do this by using the Multicut in Forests problem (see Section 2) as an auxiliary problem. Misra [pm-1fvs] pointed out that the complexity of r-Fault Tolerant Feedback Vertex Set is not known for r ≥ 2 and asked for an approximation algorithm.

###### Theorem 1.2
There is an 𝒪(r^2)-approximation algorithm for r-Fault Tolerant Feedback Vertex Set in weighted graphs, where r is a constant.

It is worth mentioning that our approach relies on explicit enumeration of certain cycles in the input graph G. This can be done in polynomial time if r is a constant (see Observation 3.2). Thus, it remains open how to approximate r-Fault Tolerant Feedback Vertex Set if r depends on the size of the input (e.g., on n, the number of vertices in G).

#### Motivation for (Fault Tolerant) Fvs.

The FVS problem is motivated by applications in deadlock recovery [GardarinS76, SiberschatzG93, Bar-YehudaGNR98], VLSI design [HudliH94], and other areas. Fault tolerant solutions are crucial to real world applications that are prone to failure of nodes in a network or entities in a system [Parter16]. In the case of FVS, the failure corresponds to not being able to eliminate the node from the network.

#### Related work.

There has been a lot of heuristic based work on the problem of tracking moving objects in a network [network-tracking, tracking-info, ZhouM19]. The parameterized complexity of Tracking Shortest Paths and Tracking Paths was studied in [tr-j, tr1-j, BiloGLP20, quad, struct-tp, ep-planar]. Feedback Vertex Set is known to admit a 2-approximation algorithm, which is tight under the Unique Games Conjecture (UGC) [BafnaBF99, ChudakGHW98]. The best known parameterized algorithm for FVS runs in 𝒪*(2.7^k) time, where k is the size of the solution [LiN20]. It is worth noting that Misra [pm-1fvs] uses Multicut in Forests as a subroutine as well. The edge version is known to admit an LP formulation whose matrix is totally unimodular [GolovinNS06] if the family of paths is non-crossing; thus, it is solvable in polynomial time. A related problem is the Hurdle Multiway Cut, for which a constant factor approximation algorithm is known (and it is again tight under UGC) [DeanGPW11].

#### Preliminaries and Notations.

We refer to [diestel] for the standard graph theory terminology. All paths we consider are simple. For a graph G, we use V(G) to denote its vertex set. For a set S ⊆ V(G) we use G − S to denote the graph that results from the deletion of the vertex set S and the edges incident to S. For a weight function w on the vertices and a set S ⊆ V(G), we let w(S) denote the sum of the respective elements, i.e., w(S) = ∑_{v ∈ S} w(v).
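As a hedged aside (not from the paper), the r-fault tolerant fvs condition discussed above can be checked directly on small instances: a set S meets every cycle in at least r+1 vertices exactly when, for every r-element subset T of S, deleting S ∖ T leaves a forest. The sketch below assumes networkx; the function name and the toy instance are illustrative only.

```python
# Checking the r-fault tolerance of a vertex set on small instances (illustration
# only; networkx and all names are assumptions, not notation from the paper).
from itertools import combinations
import networkx as nx

def is_r_fault_tolerant_fvs(G, S, r):
    """True iff every cycle of G contains at least r+1 vertices of S.
    Equivalently: for every subset T of S with |T| = min(r, |S|),
    deleting S minus T from G must leave a forest."""
    S = set(S)
    k = min(r, len(S))
    for T in combinations(S, k):
        H = G.copy()
        H.remove_nodes_from(S - set(T))
        if not nx.is_forest(H):
            return False
    return True

# A 4-cycle: {a, c} meets it in two vertices, so it is 1-fault tolerant but not 2-fault tolerant.
G = nx.cycle_graph(["a", "b", "c", "d"])
print(is_r_fault_tolerant_fvs(G, {"a", "c"}, 1), is_r_fault_tolerant_fvs(G, {"a", "c"}, 2))  # True False
```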
An unweighted version of the problem is obtained by assigning all vertices the same weight. In fact, we can also omit the weights in this case. We write vectors in boldface and their entries in normal font, i.e., is the third entry of a vector . If we apply minimum to two vectors, then it is applied entry-wise. We write for the vector of ones; we omit the superscript if the dimension is clear from the context. ## 2 Vertex Multicut in Forests In this section, we gather polynomial time (approximation) algorithms for solving Vertex Multicut in Forests. The algorithms are used later in the subsequent sections to derive the approximation algorithms for -Fault Tolerant FVS and Tracking Paths. Multicut in Forests (MCF) Input: A forest , weight function , terminal pairs . Output: A minimum weight set of vertices such that in the graph  there is no path between and  for . In this work, we consider the Unrestricted version of Multicut in Forests i.e., a solution set is allowed to contain vertices from the terminal pairs. We will consider a set of paths instead of a set of terminal pairs. It is not hard to see that these two versions are equivalent on forests; if the terminals in a pair belong to different trees we can discard the pair since there does not exist any path between them, otherwise there exists a unique path between each pair of terminals, which the solution  should intersect. While MCF is NP-hard [CalinescuFR03], the unweighted version is polynomial time solvable [CalinescuFR03, GuoHKNU08]. However, recently the following strong version of constant factor approximability of the problem was shown (in fact, the result holds for all chordal graphs, not just for forests). Consider the natural ILP formulation of the problem, where we introduce a binary variable for each vertex describing, whether the corresponding vertex should or should not be taken into a solution. At least one vertex must be taken from each path from to make a feasible solution. Consider the following LP relaxation of the natural ILP. Minimize ∑v∈Vw(v)⋅xv (LPMCF) subject to ∑v∈V(P)xv ≥1 ∀P∈P 0≤xv ≤1 ∀v∈V. ###### Proposition 1 (Agrawal et al. [AgrawalLMSZ20, Lemma 5.1]) For a given instance of Multicut in Forests one can find a solution such that in polynomial time, where is the objective value of an optimal solution to the corresponding (LP). We also need the following result of similar nature for unweighted forests, strenghtening the polynomial time solvability of the problem. ###### Lemma 1 If all the weights are equal, then there is an integral optimal solution for (LP). Furthermore, such a solution can be found in polynomial time. ###### Proof It is enough to show the lemma for each tree of separately. Root the tree in . Among all optimal solutions to the LP, consider the solution  that minimizes . Assume, for the sake of contradiction that  is not integral. Let be a vertex with and the maximum distance from the root . Note that, in particular, for all descendants  of  it holds that . Let us first assume that and let be the parent of . We define as ˆxv=⎧⎨⎩0if v=u,min{xp+xu,1}if v=p,xvotherwise. We claim that is a solution to (LP). Obviously, for every . Let . If , then . If , then either the path is fully contained in the subtree of rooted in , or . In the first case, we have . Since all the summands on the left hand side are integral by the choice of , it follows that . In the later case, if , then and otherwise . Hence, is a solution to (LP). 
Furthermore, we have that (since all the weights are equal) and which is a contradiction. The case can be proved using a similar argument for such that and if . The second part of the lemma follows from polynomial time algorithm for MCF [CalinescuFR03], since any optimal solution to the instance of MCF represents an optimal solution for the corresponding (LP) by the first part of the lemma. For the rest of the paper we use to denote the best LP relative approximation ratio achievable for MCF in the sense of Proposition 1 and Lemma 1. That is for weighted instances and for unweighted instances. ## 3 Approximate r-Fault Tolerant Feedback Vertex Set In this section, we give an algorithm for computing an approximate -fault tolerant feedback vertex set for undirected weighted graphs, for any fixed integer . Recall that an -fault tolerant feedback vertex set is a set of vertices that contains at least vertices from each cycle in the graph. A polynomial time algorithm for computing a constant factor approximate -fault tolerant feedback vertex set was given by Misra [pm-1fvs]. The factor can be easily observed to be , where is the best possible approximation ratio for MCF. We start in Section 3.2 by giving an algorithm for finding -fault tolerant feedback vertex set. Later, in Section 3.3, we show how this technique can be generalized to give an -fault tolerant feedback vertex set for any . ### 3.1 Hardness of r-Fault Tolerant Feedback Vertex Set ###### Lemma 2 For any fixed , it is NP-hard to decide whether for a given graph and an integer , contains an -fault tolerant feedback vertex set of size at most . ###### Proof The case of is equivalent to finding a feedback vertex set of size at most , which is a well known NP-complete problem [karp1972reducibility]. The case of was shown by Misra [pm-1fvs]. We extend his approach for . We will show a reduction from Vertex Cover, which is a well known NP-complete problem [karp1972reducibility]. Let  be an instance of Vertex Cover and let be a graph constructed from in the following way. First replace every edge of by a path with vertices . Additionally create two paths on vertices: on and on . We add edges , , , . Finally add a cycle with new vertices and connect to every vertex originally in . Two examples for a single edge can be seen in Figure 1. If has vertices and edges, then this process creates new vertices and new edges. As is fixed, the total size of the construction is bounded by , therefore is polynomial in the size of the input and clearly can be constructed in linear time. Let . We will now show that every cycle in has size at least and therefore the instance of -Fault Tolerant Feedback Vertex Set is not a trivial NO-instance. Let be the set of vertices added to . We will first consider cycles with vertices only in . Let be such a cycle. Either is or there exists such that contains only vertices of . The second case leads to only two possible cycles. The first cycle consists of vertices , the other one consists of vertices . The first cycle has size , the other one . Let us now consider all cycles containing some vertex from and excluding . Let , then as there must exist a cycle in such that . Also note that in , the distance between any two vertices such that is exactly . Therefore the length of is at least . Now any other cycle containing and some vertex in must contain at least two vertices from , therefore the size of is at least . Therefore every cycle in has size at least . 
We now show that has a vertex cover of size at most if and only if has an -fault tolerant feedback vertex set of size at most . Suppose that is a vertex cover of of size . Then we show that is an -fault tolerant feedback vertex set on . Note that . Every cycle with vertices only in has size exactly and all its vertices are in . Let us then consider all cycles which exclude vertex and contain a vertex in . Let be such a cycle. As shown above, such must contain at least vertices from and therefore has at least in . Now any other cycle contains a vertex in and . Let be such a cycle. If contains at least three vertices from , then there are at least vertices in and . Suppose then that contains exactly two vertices from , say and . Then there are at least vertices of in . As there is and is a vertex cover of , at least one of and must be in and therefore . Now let us show that if has an -fault tolerant feedback vertex set of size at most , then there is a vertex cover of size at most on . Let be an -fault tolerant feedback vertex set of size at most . Consider any cycle on vertices only in . Each such cycle has size exactly and therefore all of its vertices must be in . Also every vertex in is on a cycle with vertices only in , therefore . We will show that is a vertex cover of size at most . It holds , therefore . Suppose that is not a vertex cover. Then there is some edge such that . But then consider the cycle of size consisting of and . It follows that , which contradicts the assumption that is -fault tolerant. ### 3.2 Two-Fault TolerantFvs -Fault Tolerant Feedback Vertex Set Input: Undirected graph and a weight function . Output: A minimum weight set of vertices such that for each cycle in , it holds that . Let be the input graph. First, we compute a -approximate -fault tolerant feedback vertex set for using the algorithm of Misra [pm-1fvs]. Note that contains at least two vertices from each cycle in . Our goal is to compute a vertex set that contains at least three vertices from each cycle in . Hence, for each cycle in for which , we need to pick at least one more vertex from into our solution. We first identify, which pairs of vertices from are involved in such cycles. This can be done in polynomial time, by considering each pair of vertices and checking the graph for cycles. If no such contains a cycle, then we return as a -fault tolerant feedback vertex set. Otherwise, there exists at least one cycle in such that contains exactly two vertices from it. Observe that even though does not contain three vertices from each cycle in , might contain at least three vertices from some cycles in . We shall ignore such cycles. If a cycle in intersects with at vertices and , then there exist two vertex-disjoint paths and between and in , such that . In order to find a -fault tolerant fvs that extends , we need to ensure that at least one vertex from is included in the solution. As each pair of paths is uniquely determined by the neighbors of and on and , there are at most such pairs in total and all of them can be found in time. We create a family of all such pairs of vertex disjoint paths between each pair of vertices in . More precisely, for each pair of such paths of length at least we obtain and  by removing and  from and , respectively. Then we add the pair to . If and are adjacent, then for each --path of length at least in we add to the pair , where is obtained from by removing and . 
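As a hedged illustration of the pair-identification step just described (not the paper's code; networkx and the function name are assumptions): for a set S that meets every cycle in at least two vertices, the pairs u, v from S lying on a cycle met by S in exactly two vertices can be found by testing the graph obtained from G by deleting S ∖ {u, v} for a cycle.

```python
# Sketch of the pair-identification step above (illustrative; networkx and the
# function name are assumptions). S is assumed to meet every cycle of G in at
# least two vertices, so any cycle surviving in G - (S \ {u, v}) passes through both u and v.
from itertools import combinations
import networkx as nx

def pairs_on_two_vertex_cycles(G, S):
    """Return the pairs {u, v} from S lying on a cycle that contains no other vertex of S."""
    S = set(S)
    pairs = []
    for u, v in combinations(S, 2):
        H = G.copy()
        H.remove_nodes_from(S - {u, v})
        try:
            nx.find_cycle(H)          # raises NetworkXNoCycle if H is a forest
            pairs.append({u, v})
        except nx.NetworkXNoCycle:
            pass
    return pairs

# Two triangles through the same pair {u, v}:
G = nx.Graph([("u", "a"), ("a", "v"), ("u", "b"), ("b", "v"), ("u", "v")])
print(pairs_on_two_vertex_cycles(G, {"u", "v"}))   # one pair found: {u, v}
```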
Next, we use the following linear program to identify which path among each pair should be selected, from which a vertex would be picked in order to be included in the solution. Minimize ∑v∈V∖Sw(v)⋅xv (LPpairs) subject to ∑v∈V(P1)∪V(P2)xv ≥1 ∀{P1,P2}∈P 0≤xv ≤1 ∀v∈V∖S. We solve the above linear program in polynomial time using the Ellipsoid method [GrotschelLS81] (see also [GLS1988, Chapter 3]). Let be an optimal solution for the above LP and be its value. ###### Observation 3.1 Let be a -fault tolerant fvs in (not necessarily extending ) and let be the optimum value of (LP). Then, . ###### Proof We claim that the vector defined for as ˆxv={1if v∈S∗0otherwise constitutes a solution to (LP). In order to see this let be a pair of paths and let be the cycle formed by these paths together with some . Since is a -fault tolerant fvs, we have and thus as needed. Hence, . Next, we create a set of paths . For each path , we include in if holds. Through this process, we are selecting the paths from which we will include at least one vertex in our solution. Finally, we create an instance of Multicut in Forests. Consider the corresponding LP relaxation. Minimize ∑v∈V∖Sw(v)⋅yv (LPpaths) subject to ∑v∈V(P)yv ≥1 ∀P∈Px∗ 0≤yv ≤1 ∀v∈V∖S. ###### Lemma 3 Let be an optimal solution of (LP) and let be its objective value. Then, is a solution to (LP). In particular, holds, where is the value of an optimal solution to (LP). ###### Proof Recall that we have for every path , by the definition of . Thus, we have for all and clearly for all . We conclude that is a solution to (LP). We now show how to combine the -approximate -fault tolerant fvs with an approximate solution for Multicut in Forests to obtain a -approximate -fault tolerant fvs. ###### Lemma 4 Let be an -approximate -fault tolerant fvs and let be an integral solution to (LP) of weight at most times the weight of an optimal solution. Then, is an -approximate -fault tolerant fvs. ###### Proof Let  be an optimal -fault tolerant fvs. We know that . By Observation 3.1 there is a solution  to (LP) with . Thus, by Lemma 3 we have . In total we get w(S′) =w(S∪{v∈V∖S∣yv=1}) =w(S)+∑v∈V∖Sw(v)⋅yv ≤α⋅w(S∗)+2μ⋅w(S∗) =(α+2μ)w(S∗). It is not hard to see that is a -fault tolerant fvs. Indeed, if contains at least three vertices in a cycle of the input graph, so does . Thus, we can focus on a cycle  with . If this is the case, then the conditions of (LP) imply that in it holds for some (follows from the construction of (LP)). Therefore, we have . ###### Corollary 1 There is a -approximation algorithm for unweighted -Fault Tolerant FVS and -approximation algorithm for weighted -Fault Tolerant FVS. ###### Proof We begin with the -approximation algorithm for -Fault Tolerant FVS by Misra [pm-1fvs]. In polynomial time we construct (LP) and obtain an optimal solution for it. Based on that we construct and (LP) in polynomial time. By Proposition 1 or Lemma 1 one can in polynomial time find an integral solution to (LP) of weight at most times the weight of an optimal solution. By Lemma 4 this solution combined with the initial -fault tolerant fvs gives -approximate -fault tolerant fvs. The algorithm works in polynomial time as it uses polynomial-time routines. ### 3.3 Higher Fault Tolerant fvs Now we explain the procedure to scale up the algorithm from Section 3.2 to compute an -fault tolerant fvs for . -Fault Tolerant Feedback Vertex Set Input: Undirected graph , weight function . Output: A minimum weight set of vertices such that for each cycle in , it holds that . 
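As a brief hedged aside before the section continues: the program (LP_pairs) above, its multicut relaxation, and the (LP_s-tuples) program used below all share the same covering form, namely minimize ∑ w(v)·x_v subject to ∑_{v∈C} x_v ≥ 1 for a family of vertex sets C and 0 ≤ x_v ≤ 1. The following minimal sketch solves such a relaxation with an off-the-shelf solver; the paper itself relies on the Ellipsoid method, and scipy, the function name, and the toy instance are assumptions made purely for illustration.

```python
# Hedged sketch: solving a covering relaxation of the above form with an off-the-shelf
# LP solver. The paper uses the Ellipsoid method; scipy, the function name, and the
# toy instance are assumptions made for illustration.
import numpy as np
from scipy.optimize import linprog

def solve_covering_lp(weights, constraint_sets):
    """weights: dict vertex -> weight; constraint_sets: iterable of vertex sets C,
    each encoding the constraint sum_{v in C} x_v >= 1 with 0 <= x_v <= 1."""
    vertices = sorted(weights)
    index = {v: i for i, v in enumerate(vertices)}
    c = np.array([weights[v] for v in vertices], dtype=float)
    constraint_sets = list(constraint_sets)
    # sum_{v in C} x_v >= 1 becomes -sum_{v in C} x_v <= -1 for linprog.
    A_ub = np.zeros((len(constraint_sets), len(vertices)))
    for row, C in enumerate(constraint_sets):
        for v in C:
            A_ub[row, index[v]] = -1.0
    b_ub = -np.ones(len(constraint_sets))
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(vertices))
    return dict(zip(vertices, result.x)), result.fun

# Toy instance: both constraints can be covered by the single cheap vertex 'b'.
x, value = solve_covering_lp({"a": 2, "b": 1, "c": 2}, [{"a", "b"}, {"b", "c"}])
print(x, value)   # x_b = 1 and objective value 1 (integral here, fractional in general)
```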
For the rest of the section we assume that is a fixed constant. We follow a recursive process to compute an -fault tolerant fvs. We start with an approximate solution for -Fault Tolerant FVS. Note that contains at least vertices from each cycle in . Similar to the process in algorithm in Section 3.2, here we identify every group of vertices that are involved in a cycle in , such that . Such cycles can be found by checking whether the graph , for such that , contains a cycle. If no such contains a cycle, then is an -fault tolerant fvs, and we return it as a solution. Else, we focus on cycles which contain exactly vertices from . We create a family of path sets in the following way. For each cycle that contains exactly  vertices of , labeled in cyclic order along as vertices , we consider the paths, say , where starts in and ends in (modulo ) and is a subpath of . We remove paths with only two vertices, and we shorten the rest by removing their end vertices, leaving us with paths , where (as some paths may have been removed). If the set of paths is non-empty, then we add it to the family . If the set is empty, then we have found a cycle of length and we report that there is no -fault tolerant feedback vertex set for as we cannot choose at least vertices on a cycle of length . ###### Observation 3.2 Construction of can be done in time. ###### Proof There are no more than subsets of of order . Each such subset can be a part of many different cycles. To find all such cycles we take each of its orderings and we fix a predecessor and a successor (taken out of the respective vertex neighborhood) of each . This constitutes at most different possibilities for each ordering of . There is at most one path from the successor of to the predecessor of (modulo ) in as it is a forest. As the paths are now fixed, we just need to check whether they form a cycle by checking that the paths are vertex disjoint, which can be done in polynomial time. Altogether, we have time. Once the family is computed, we solve the following linear program using the Ellipsoid method in polynomial time. Let be its optimal solution and its value. Minimize ∑v∈V∖Sw(v)⋅xv (LPs-tuples) subject to ∑v∈⋃si=1V(Pi)xv ≥1 ∀{P1,…,Ps}∈P 0≤xv ≤1 ∀v∈V∖S. Similarly to creftypecap 3.1 we get the following. ###### Observation 3.3 Let be an -fault tolerant fvs in and let be the optimum value of (LP). Then, . Next, we create a set of paths . For each path , we include in if . As in Section 3.2, we create an instance of Multicut in Forests and consider its LP relaxation (LP). ###### Lemma 5 Let be an -fault tolerant fvs. Let be an optimal solution of (LP) and let be its objective value. Then, is a solution to (LP) for . In particular, holds, where is the value of an optimal solution to (LP). ###### Proof Recall that we have for each path . Thus, we have for all and clearly for all . We conclude that is a solution to (LP). ###### Lemma 6 Let be a constant. Let be an -approximate -fault tolerant fvs and let be a solution to (LP) induced by  with weight at most times the optimal. Then, is an -approximate -fault tolerant fvs. ###### Proof Let  be an optimal -fault tolerant fvs. We know that . By Observation 3.3 there is a solution  to (LP) with . Thus, by Lemma 5 we have . In total we get w(S′) =w(S∪{v∈V∖S∣yv=1}) =w(S)+∑v∈V∖Sw(v)⋅yv ≤α⋅w(S∗)+μ⋅r⋅w(S∗) =(α+μr)w(S∗). Let us prove that is -fault tolerant. If contains at least vertices in a cycle of the input graph, so does . Thus, we can focus on a cycle  with . 
If this is the case, then the conditions of (LP) imply that in it holds for some (follows from the construction of (LP)). Therefore, we have for every such . Hence, is an -fault tolerant fvs. ###### Theorem 3.4 (precise version of Theorem 1.2) Let be a constant. There is a -approximation algorithm for -Fault Tolerant Feedback Vertex Set. ###### Proof The -Fault Tolerant FVS problem has a -approximation algorithm due to Misra [pm-1fvs]. By Lemma 6 we add to the approximation factor when devising -fault tolerant fvs from -fault tolerant fvs it follows that for an arbitrary we have -approximation algorithm for finding an -fault tolerant fvs. The algorithm by Misra [pm-1fvs] gives the -approximation for -Fault Tolerant FVS in polynomial time. We showed how to devise an -fault tolerant fvs from an -fault tolerant fvs. We use such steps to incrementally increase the fault tolerance of the fvs. In step of devising -fault tolerant fvs we use time to find as seen in Observation 3.2, then we construct and solve (LP) in polynomial time using the Ellipsoid method and then construct (LP) and solve it using either Proposition 1 or Lemma 1 in polynomial time. As is a constant we conclude that our -approximation algorithm for -Fault Tolerant FVS has polynomial time complexity. ## 4 Approximate Tracking Set In this section, we give a constant factor approximation algorithm for Tracking Paths. Let be the input graph and and the source and the target. We start by applying the following reduction rule on . This can clearly be done in polynomial time. ###### Reduction Rule 1 (Banik et al. [tr-j]) If there exists a vertex or an edge that does not participate in any - path, then delete it. We use the term reduced graph to denote a graph that has been preprocessed using Reduction Rule 1. For the sake of simplicity, after the application of reduction rule, we continue to refer to the reduced graph as . Next, we describe local source-destination pair (local - pair), a concept that has served as crucial for developing efficient algorithms for Tracking Paths [tr-j, iwoca, struct-tp, ep-planar]. For a subgraph , and vertices , we say that is a local - pair for if 1. there exists a path in from to , say , 2. there exists a path in from to , say , 3. , and 4. and . Note that a subgraph can have more than one local source-destination pair. It can be verified in time whether a pair of vertices form a local source-destination pair for by checking if there exist disjoint paths from to and to in the graph , using the disjoint path algorithm from [KAWARABAYASHI2012424]. ###### Observation 4.1 Let be a graph and let be a subgraph of . We can verify in polynomial time whether is a local - pair for . We recall the following lemma from previous work. ###### Lemma 7 ([iwoca, Lemma 2]) In a graph , if is not a tracking set for , then there exist two - paths with the same sequence of trackers, and they form a cycle  in , such that has a local source  and a local destination , and . Eppstein et al. [ep-planar] mentioned that a -fault tolerant feedback vertex set is always a tracking set. Here we use a variation of this idea to compute an approximate tracking set. Specifically, we start with a -approximate feedback vertex set and then identify the cycles that contain only one or two feedback vertices. We check if these cycles need more vertices as trackers and we use (LP) and (LP) explained in the previous section to add them. Now we present the algorithm for computing a -approximate tracking set in polynomial time. 
We start by computing a -approximate feedback vertex set on the reduced graph using the algorithm by Bafna et al. [BafnaBF99]. We first check whether is a tracking set for by using the tracking set verification algorithm given in [tr-j]. If it is a tracking set, we return as the solution, otherwise we proceed further. If is not a tracking set, we will find vertices on which to place additional trackers in the following way. First we identify cycles such that . Each such cycle can be obtained by taking a vertex together with a path between a pair of its neighbors in . For each vertex we check, whether (or ) is a local - pair for . If this is the case, then we distinguish two cases. If and are adjacent on , then let be the path . We add to the pair . If and are non-adjacent on , then let and be the two paths between and forming the cycle . We obtain and  by removing and  from and , respectively. Then we add the pair to . If a cycle in intersects with in vertices and , then there exist two vertex-disjoint paths and between and , such that . Hence, each such cycle is uniquely determined by the neighbors of and on and . If, furthermore, (or ) is a local - pair for , then we add some pair to . In particular, if both are of length at least  we obtain and  by removing and  from and , respectively. Then we add the pair to . If one of the paths, say , is of length (i.e., and are adjacent on ), we add to the pair , where is obtained from by removing and . Similarly to creftypecap 3.2, we have at most candidate cycles with a single vertex of and for each of them we have at most candidates on . We have cycles with two vertices of . For each of them, we check, whether (or ) is a local - pair in time. Hence, can be obtained in time. Now we use (LP) with to identify the paths on which we want to place at least one additional tracker. Let be an optimal solution of (LP), which can be obtained in polynomial time using the Ellipsoid method. We construct as the set of all paths such that . We first show the following observation. ###### Observation 4.2 Let be a reduced graph and be a tracking set for . Let be the optimum value of (LP). Then . ###### Proof Let be a vector such that for all ˆxv={1if v∈T∗0otherwise. We show that is a solution to (LP). Suppose that for some , therefore . Let be the vertices such that contains a cycle , such that are a local - pair for . Such must exist because of the way  was constructed. Since is a local - pair, there exist two distinct paths and
2021-10-24 19:56:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851823210716248, "perplexity": 631.4344247135194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00268.warc.gz"}
https://statisticalphysics.leima.is/vocabulary/transforms.html
# Transforms

## Fourier Transform

Fourier transforms are quite useful in solving differential equations. By decomposing the functions using the Fourier transform, we might be able to simplify many differential equations. Suppose we have a differential equation $\frac{\partial}{\partial x} f(x) = a g(x).$ To solve the equation, we decompose $$f(x)$$ using its Fourier transform, $f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(k) e^{ikx} dk.$ Then we get $\frac{1}{2\pi}\frac{\partial}{\partial x} \int_{-\infty}^{\infty} f(k) e^{ikx} dk = a \frac{1}{2\pi}\int_{-\infty}^{\infty} g(k) e^{ikx} dk.$ The equation is then simplified into $ik f(k) = a g(k).$

Note: To summarize, we simply replace the differential operators. $\begin{split}\frac{\partial}{\partial x} e^{ikx}=ike^{ikx} &\implies \frac{\partial}{\partial x} \to ik \\ \frac{\partial^2}{\partial x^2} e^{ikx} = -k^2 e^{ikx} & \implies \frac{\partial^2}{\partial x^2} \to -k^2\end{split}$

## Laplace Transform

Similar to the Fourier transform, the Laplace transform is also useful in equation solving. The Laplace transform takes a function of $$t$$, e.g. $$f(t)$$, to a function of $$s$$, $\mathscr{L}[f(t)] = \int_0^\infty f(t) e^{ - s t} dt .$

Some useful properties:

1. $$\mathscr{L}[\frac{d}{dt}f(t)] = s \mathscr{L}[f(t)] - f(0)$$;
2. $$\mathscr{L}[\frac{d^2}{dt^2}f(t)] = s^2 \mathscr{L}[f(t)] - s f(0) - \frac{d f(0)}{dt}$$;
3. $$\mathscr{L}[\int_0^t f(\tau) d\tau ] = \frac{\mathscr{L}[f(t)]}{s}$$;
4. $$\mathscr{L}[f(\alpha t)](s) = \frac{1}{\alpha} \mathscr{L}[f(t)](s/\alpha)$$;
5. $$\mathscr{L}[e^{at}f(t)](s) = \mathscr{L}[f(t)](s-a)$$;
6. $$\mathscr{L}[tf(t)] = - \frac{d}{ds} \mathscr{L}[f(t)]$$.

Some useful results:

1. $$\mathscr{L}[1] = \frac{1}{s}$$;
2. $$\mathscr{L}[\delta] = 1$$;
3. $$\mathscr{L}[\delta^{(k)}] = s^k$$;
4. $$\mathscr{L}[t] = \frac{1}{s^2}$$;
5. $$\mathscr{L}[t^n] = \frac{n!}{s^{n+1}}$$;
6. $$\mathscr{L}[e^{at}]= \frac{1}{s-a}$$.

A very nice property of the Laplace transform is $\begin{split}\mathscr{L}_s [e^{-at}f(t)] &= \int_0^\infty e^{-st} e^{-at} f(t) dt \\ & = \int_0^\infty e^{-(s+a)t}f(t) dt \\ & = \mathscr{L}_{s+a}[f(t)]\end{split}$ which is very useful when dealing with master equations. Two useful results are $\mathscr{L}[I_0(2Ft)] = \frac{1}{\sqrt{ s^2 - (2F)^2 }}$ and $\mathscr{L}[J_0(2Ft)] = \frac{1}{\sqrt{s^2 + (2F)^2}},$ where $$I_0(2Ft)$$ is the modified Bessel function of the first kind and $$J_0(2Ft)$$ is the Bessel function of the first kind. Using the property above, we can find $\mathscr{L}[I_0(2Ft)e^{-2Ft}] = \frac{1}{\sqrt{(s + 2F)^2 - (2F)^2}} .$

Example: Solving Differential Equations. For a first order differential equation $f'(x) = x,$ we apply the Laplace transform, $s F(s) - f(0) = \frac{1}{s^2},$ from which we solve $F(s) = \frac{1}{s^3} + \frac{f(0)}{s}.$ Looking this up in the transform table, we find that $f(x) = x^2/2 + f(0).$

## Legendre Transform

The geometrical meaning of the Legendre transformation in thermodynamics can be illustrated by the following graph.

Fig. 4 Legendre transform

In the above example, we know that entropy $$S$$ is actually a function of temperature $$T$$. For simplicity, we assume that they are monotonically related like in the graph above. When we talk about the quantity $$T \mathrm d S$$ we actually mean the area shaded with blue grid lines. Meanwhile the area shaded with orange lines means $$S \mathrm d T$$. Let's think about the change in internal energy.
For this example, we only consider the thermal part, $\mathrm d U = T \mathrm d S .$ The internal energy change is equal to the area shaded with blue lines. The area shaded with orange lines corresponds (up to a sign) to the change in the Helmholtz free energy, $\mathrm d A = - S \mathrm d T .$ The two quantities $$T \mathrm d S$$ and $$S \mathrm d T$$ sum up to $$d(TS)$$. This is also the change in the area of the rectangle determined by the two edges from $$0$$ to $$T$$ and from $$0$$ to $$S$$. This is a Legendre transform, $\mathrm d U \to \mathrm d A,$ or $T\mathrm dS \to S \mathrm d T.$ The point is that $$S(T)$$ is a function of $$T$$. However, if we know the blue area, we can find out the orange area. This means that the two functions $$A(T)$$ and $$U(S)$$ form a pair of sorts. Choosing one of them for a specific calculation is a free choice, but either one carries all the information once the relation between $$T$$ and $$S$$ is known.

The above example sheds light on the Legendre transform. The mathematical form is a little bit tricky, so we will illustrate it using an example. For a function $$U(T, X)$$, we find its differential as $\mathrm d U(T, X) = \frac{\partial U}{\partial T} \mathrm d T + \frac{\partial U}{\partial X} \mathrm d X.$ For convenience, we define $\begin{split}S =& \frac{\partial U}{\partial T} \\ Y =& \frac{\partial U}{\partial X}.\end{split}$ The differential of the function becomes $\mathrm d U(T, X) = S \mathrm dT + Y \mathrm d X,$ where $$S$$ ($$Y$$) and $$T$$ ($$X$$) are a conjugate pair. A Legendre transform changes the variable of the differential from $$T$$ ($$X$$) to $$S$$ ($$Y$$). For example, we know that $S \mathrm d T = \mathrm d (ST) - T \mathrm d S.$ Plugging this into $$\mathrm d U$$, we get $\mathrm d U(T, X) - \mathrm d(ST) = - T \mathrm dS + Y \mathrm d X.$ The left hand side is defined as a new differential $\mathrm d A(S, X) = \mathrm d ( U(T, X) - ST ).$ In these calculations, $$U$$ is the internal energy and $$A$$ is the Helmholtz free energy. The transform that changes the variable from $$X$$ to $$Y$$ gives us the enthalpy $$H$$. If we transform both variables, then we get the Gibbs free energy $$G$$. More about these thermodynamic potentials will be discussed in the following chapters.

## Refs & Note

1. Zia, R. K. P., Redish, E. F., and McKay, S. R. "Making sense of the Legendre transform." (2009).
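As a hedged aside tied to the Laplace-transform example earlier on this page (solving $$f'(x) = x$$): the table lookup can be reproduced with a computer algebra system. The use of sympy and the symbol names below are assumptions made for this sketch, not part of the original notes.

```python
# Hedged sketch (sympy and the symbol names are assumptions, not part of these notes):
# reproducing the table lookup in the Laplace-transform example f'(x) = x above.
import sympy as sp

x, s = sp.symbols("x s", positive=True)
f0 = sp.Symbol("f0")          # stands for the initial value f(0)

# Transform of the right-hand side: L[x] = 1/s**2.
F_rhs = sp.laplace_transform(x, x, s, noconds=True)

# s F(s) - f(0) = 1/s**2  =>  F(s) = 1/s**3 + f(0)/s
F = (F_rhs + f0) / s

# Invert to recover f(x) = x**2/2 + f(0).
f = sp.inverse_laplace_transform(F, s, x)
print(sp.simplify(f))   # x**2/2 + f0, possibly wrapped in Heaviside(x) factors (equal to 1 for x > 0)
```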
2020-10-23 22:14:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9015315175056458, "perplexity": 323.278172632287}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107865665.7/warc/CC-MAIN-20201023204939-20201023234939-00610.warc.gz"}
http://math.soimeme.org/~arunram/Resources/Wiesner/TFSDTheVirasoroAlgebra.html
## Translation Functors and the Shapovalov Determinant

Last updated: 10 February 2015

This is an excerpt from the PhD thesis Translation Functors and the Shapovalov Determinant by Emilie Wiesner, University of Wisconsin-Madison, 2005.

## The Virasoro Algebra

The Virasoro algebra is the Lie algebra $\text{Vir}=ℂ\text{-span}\{z,d_k | k\in ℤ\}$ with bracket $\left[,\right]$ given by $[d_k,z]=0, \qquad [d_j,d_k]=(j-k)d_{j+k}+\delta_{j,-k}\tfrac{1}{12}(j^3-j)z.$ ([MPi1995], 1.9.4). The Virasoro algebra is the universal central extension of the Lie algebra $W=ℂ\text{-span}\left\{{D}_{k} | k\in ℤ\right\}$ with relations $[D_j,D_k]=(j-k)D_{j+k}.$ See Section 1.3.5 for a definition of central extensions. The Lie algebra $W$ is the Witt algebra. The Witt algebra can be identified with the Lie algebra of derivations on $ℂ\left[t,{t}^{-1}\right]\text{.}$ That is, the derivations $\left\{{D}_{k}=-{t}^{k+1}\frac{d}{dt} | k\in ℤ\right\}$ are a basis for the derivations of $ℂ\left[t,{t}^{-1}\right],$ and they satisfy the relation ${D}_{j}{D}_{k}-{D}_{k}{D}_{j}=\left(j-k\right){D}_{j+k}\text{.}$

The Virasoro algebra has a regular, finite Hermitian triangular decomposition $\text{Vir}={\text{Vir}}_{-}\oplus 𝔥\oplus {\text{Vir}}_{+}$ where $\text{Vir}_- = \text{span}\{d_n | n\in ℤ_{<0}\}; \quad 𝔥 = \text{span}\{d_0,z\}; \quad \text{Vir}_+ = \text{span}\{d_n | n\in ℤ_{>0}\}.$ The associated Hermitian anti-involution on Vir is $\varphi :\text{Vir}\to \text{Vir}$ given by $\varphi(d_n)=d_{-n}, \quad \varphi(z)=z.$

Let $U\left(\text{Vir}\right)$ be the universal enveloping algebra of Vir. Proposition 2.0.5 shows that $U\left(\text{Vir}\right)$ inherits a triangular decomposition from Vir: $U(\text{Vir})=U(\text{Vir}_-)\,S(𝔥)\,U(\text{Vir}_+), \qquad (3.1)$ where $S\left(𝔥\right)=U\left(𝔥\right)$ is the symmetric algebra of $𝔥\text{.}$ Bases for $U\left({\text{Vir}}_{-}\right)$ and $U\left({\text{Vir}}_{+}\right)$ are $\left\{{d}_{-{\lambda }_{1}}\cdots {d}_{-{\lambda }_{k}} | {\lambda }_{i}\in ℤ,{\lambda }_{1}\ge \cdots \ge {\lambda }_{k}>0\right\}$ and $\left\{{d}_{{\lambda }_{k}}\cdots {d}_{{\lambda }_{1}} | {\lambda }_{i}\in ℤ,{\lambda }_{1}\ge \cdots \ge {\lambda }_{k}>0\right\},$ respectively. In order to write these bases more efficiently, we introduce the following notation. For a partition $\lambda :$ ${\lambda }_{1}\ge \cdots \ge {\lambda }_{k}>0,$ the weight of $\lambda$ is $\mid \lambda \mid ={\lambda }_{1}+\cdots +{\lambda }_{k}\text{.}$ The number of parts of $\lambda$ is denoted $l\left(\lambda \right)=k\text{.}$ Define $d_\lambda = d_{\lambda_k}\cdots d_{\lambda_1} \quad\text{and}\quad d_{-\lambda} = d_{-\lambda_1}\cdots d_{-\lambda_k}.$ Then, the bases for $U\left({\text{Vir}}_{-}\right)$ and $U\left({\text{Vir}}_{+}\right)$ can be rewritten as $\left\{{d}_{-\lambda } | \lambda \text{ a partition}\right\}$ and $\left\{{d}_{\lambda } | \lambda \text{ a partition}\right\},$ respectively.

### Category $𝒪$

Category $𝒪$ was introduced in Section 2.1. We now make a few comments specific to the Virasoro algebra.
Recall ${𝔥}^{*}={\text{Hom}}_{ℂ}\left(𝔥,ℂ\right)\text{.}$ We can identify weights $\lambda \in {𝔥}^{*}$ with pairs in ${ℂ}^{2}$ by $λ↔(λ(d0),λ(z)).$ Then, the partial ordering on ${𝔥}^{*}$ is given by $μ<λ if μ(z)= λ(z) and λ(d0) -μ(d0)∈ℤ<0.$ For $\lambda \in {𝔥}^{*},$ the Verma module $M\left(\lambda \right)$ is the induced module $M(λ)=U(Vir) ⊗U(Vir+⊕𝔥) ℂλ.$ We write $M\left(h,c\right)$ for $M\left(\lambda \right),$ where $\left(h,c\right)$ is the pair identified with $\lambda \text{.}$ Similarly, we write $J\left(h,c\right)$ for the unique maximal submodule of $M\left(h,c\right)$ (Lemma 2.1.3) and $L\left(h,c\right)$ for the unique irreducible quotient of $M\left(h,c\right)\text{.}$ Using the PBW basis for $U\left({\text{Vir}}_{-}\right),$ we see $M(h,c)=⨁n≥0 M(h,c)(h+n,c), where {d-λv+ | ∣λ∣=n} is a basis for M(h,c)(h+n,c).$ Then $\text{dim} M{\left(h,c\right)}^{\left(h+n,c\right)}=p\left(n\right),$ where $p\left(n\right)$ is the number of partitions of $n\text{.}$ Recall from Section 2.1.1 that the character of a module $M$ records the dimensions of the weight spaces of $M\text{.}$ For $M\left(h,c\right),$ we have $ch(M(h,c)) = ∑n=1∞ p(n) e(h+n,c) (3.2) = e(h,c) ∏j=1∞(1-qj) (3.3)$ where $g={e}^{\left(1,0\right)}\text{.}$ ### Affine Lie Algebras The Virasoro algebra has a close relationship with affine Lie algebras. In particular, it is possible to construct a representation of the Virasoro algebra on certain modules for affine Lie algebras. Before discussing this construction, we provide a brief introduction to affine Lie algebras. Recall from Section 1.2.1 that a reductive Lie algebra $𝔤$ is a direct sum of an abelian Lie algebra and simple Lie algebras. Let $𝔤$ be a finite-dimensional reductive Lie algebra with nondegenerate bilinear form $\left(,\right)\text{.}$ (We assume this is the Killing form when $𝔤$ is semisimple.) The affine Lie algebra $\stackrel{ˆ}{𝔤}$ associated to $𝔤$ is $𝔤ˆ=(ℂ[t,t-1]⊗𝔤) ⊕ℂc⊕ℂd,$ with relations $[tm⊗x,tn⊗y]= tm+n⊗[x,y]+m δm,-n(x,y)c, [c,tm⊗x]=0, [c,d]=0, [d,tn⊗x]=ntn ⊗x,$ for $m,n\in ℤ$ and $x,y\in 𝔤\text{.}$ We define $𝔤ˆ′=[𝔤ˆ,𝔤ˆ] =ℂ[t,t-1]⊗𝔤+ℂc.$ Finally, observe that $𝔤\subseteq \stackrel{ˆ}{𝔤}$ via the identification $x↦1\otimes x\text{.}$ For $m\in ℤ$ and $x\in 𝔤,$ we adopt the notation $x(m)=tm⊗x.$ We extend the form $\left(,\right)$ to a bilinear form on all of $\stackrel{ˆ}{𝔤}$ by $(x(n),y(m))= δn,-m(x,y), (x(n),c)=0, (x(n),d)=0, (c,d)=1, (c,d)=0, (d,d)=0.$ Then $\left(,\right)$ is nondegenerate on $\stackrel{ˆ}{𝔤}\text{.}$ The amne Lie algebra $\stackrel{ˆ}{𝔤}$ has a triangular decomposition with Cartan subalgebra $𝔥ˆ=𝔥⊕ℂd⊕ℂc.$ Then ${\stackrel{ˆ}{𝔥}}^{*}$ has a basis $\left\{{\alpha }_{1},\dots ,{\alpha }_{n},\delta ,\zeta \right\}$ where $\left\{{\alpha }_{i}\right\}$ are the simple roots of $𝔤$ and $δ(𝔥)=0 δ(d)=1 δ(c)=0; ζ(𝔥)=0 ζ(d)=0 ζ(c)=1.$ The bilinear form $⟨,⟩$ on ${𝔥}^{*}$ extends to all of ${\stackrel{ˆ}{𝔥}}^{*}$ by $⟨δ,αi⟩ = 0=⟨ζ,αi⟩, ⟨δ,ζ⟩ = 1 ⟨δ,δ⟩ = 0=⟨ζ,ζ⟩.$ #### Restricted Modules and the Casimir Element A $\stackrel{ˆ}{𝔤}$ module $V$ is restricted if for all $v\in V,$ $x\left(n\right)v=0$ for each $x\in 𝔤$ for $n$ sufficiently large. In particular, simple modules $L\left(\lambda \right)$ (since they are highest weight modules) are restricted modules. 
The restricted completion $\stackrel{ˆ}{U\left(\stackrel{ˆ}{𝔤}\right)}$ of $U\left(\stackrel{ˆ}{𝔤}\right)$ is the set of infinite sums $\sum _{i=1}^{\infty }{x}_{i},$ ${x}_{i}\in U\left(\stackrel{ˆ}{𝔤}\right),$ such that for any restricted module $V$ and $v\in V,$ ${x}_{i}v=0$ for all but finitely many ${x}_{i}\text{.}$ Two sums are considered the same if they act the same on all restricted modules. (See [Kac1104219], 2.5 and 12.8 for more on these definitions.) Let $𝔤$ be a simple Lie algebra. Let $\left\{{u}_{i} | 1\le i\le \text{dim} 𝔤\right\}$ be a basis for $𝔤,$ and let ${u}^{i}$ be a dual basis with respect to $\left(,\right),$ so that $\left({u}_{i},{u}^{j}\right)={\delta }_{i,j}\text{.}$ We define the Casimir element for $\stackrel{ˆ}{𝔤}$ to be $Ω=2(c+g)d+ ∑iuiui+2 ∑n=1∞∑i ui(-n)ui(n)$ where $g=\frac{1}{2}⟨\theta +2\rho ,\theta ⟩$ is the dual Coxeter number of $𝔤\text{.}$ (Recall $\rho =\frac{1}{2}\sum _{\beta \in {R}_{+}}\beta \text{.)}$ Observe that $\mathrm{\Omega }\in \stackrel{ˆ}{U\left(\stackrel{ˆ}{𝔤}\right)}\text{.}$ ([Kac1104219], Theorem 2.6 and Corollary 2.6). (i) Let $x\in \stackrel{ˆ}{𝔤}\text{.}$ Then as operators on a restricted $\stackrel{ˆ}{𝔤}\text{-module,}$ $\left[\mathrm{\Omega },x\right]=0\text{.}$ (ii) For $\lambda \in {𝔥}^{*},$ $\mathrm{\Omega }$ acts on $L\left(\lambda \right)$ by $⟨\lambda +2\stackrel{ˆ}{\rho },\lambda ⟩,$ where $\stackrel{ˆ}{\rho }=\rho +\zeta \text{.}$ ### The Virasoro Algebra and Affine Lie Algebras We now construct an action of the Virasoro algebra on restricted $\stackrel{ˆ}{𝔤}\text{-modules.}$ This construction, known as the Sugawara construction, follows [KRa1987]. Let $𝔤$ be a simple or abelian Lie algebra and let $\stackrel{ˆ}{𝔤}$ be the associated affine Lie algebra. Recall that the Virasoro algebra is the universal central extension of the Lie algebra of differential operators on $ℂ\left[t,{t}^{-1}\right]\text{.}$ Let $Dk=tk+1 ddt.$ These operators have a natural action on $\stackrel{ˆ}{𝔤}\prime$ given by $[Dk,x(n)] = tk+1ddt (tn⊗x)=nx (n+k) [Dk,c] = 0.$ Note that $\left[{D}_{0},x\left(n\right)\right]=nx\left(n\right)=\left[d,x\left(n\right)\right]\text{;}$ that is, the action of ${D}_{0}$ on $\stackrel{ˆ}{𝔤}\prime$ coincides with the action of $d\text{.}$ We would like to define an action of Vir on $\stackrel{ˆ}{𝔤}\text{-modules}$ that is consistent with these relations, For $k\in ℤ$ let $Tk=12∑n∈ℤ ∑1≤i≤dim 𝔤: ui(-n)ui (n+k):∈ U(𝔤ˆ)ˆ,$ where the normal ordering $:·:$ is $:x(-n)y(m):≔ { x(-n)y(m) if -n≤m, y(m)x(-n) if -n>m.$ Note that $T0=12Ω-2 (c+g)d. (3.4)$ As we will show, the operators ${T}_{k}\in \stackrel{ˆ}{U\left(\stackrel{ˆ}{𝔤}\right)}$ mimic the action of ${D}_{k}$ on $\stackrel{ˆ}{𝔤}\prime \text{.}$ For all $k,m\in ℤ$ and $x\in 𝔤,$ $\left[{T}_{k},x\left(m\right)\right]=-\left(c+g\right)mx\left(m+k\right)\text{.}$ Proof. Let $m,n\in ℤ\text{.}$ From Equations 1.3 and 1.4, and Proposition 1.2.3 we have $∑i [ui(m),ui(n)] = ∑i[ui,ui] (m+n)+κ(ui,ui) mδm,-nc = mδm,-nc dim 𝔤; (3.5)$ $∑i [[x,ui](m),ui(n)] = ∑i[[x,ui],ui] (m+n) = 2gx(m+n); (3.6)$ and $∑i[x,ui] (m)ui(n) = ∑i,jκ ([x,ui],uj) uj(m)ui(n) = ∑i,j-κ (ui,[x,uj]) uj(m)ui(n) = ∑i,j-uj(m) κ([x,uj],ui) ui(n) = -∑juj(m) [x,uj](n). (3.7)$ We then have $[x(m),Tk] = 12∑n∈ℤ ∑i [x(m),ui(-n)ui(n+k)] ⏟ Since [x(m),c]=0, Equation 3.5 implies that we can ignore the normal ordering. 
= 12∑n∈ℤ ∑i [x(m),ui(-n)] ui(n+k)+ui (-n)[x(m),ui(n+k)] = 12∑n∈ℤ ∑i[x,ui] (m-n)ui (n+k)+mδm,n κ(x,ui)c ui(n+k) +12∑n∈ℤ ∑iui(-n) [x,ui] (m+n+k)+m δm,-(n+k) κ(x,ui)c ui(-n) = 12∑n $\square$ For $j,k\in ℤ,$ $\left[{T}_{j},{T}_{k}\right]=\left(c+g\right)\left(j-k\right){T}_{j+k}+{\delta }_{j,-k}\frac{{j}^{3}-j}{12}\left(\text{dim} 𝔤\right)c\left(c+g\right)\text{.}$ Proof. $[Tj,Tk] = 12∑n∈ℤ ∑i[Tj,ui(-n)ui(n+k)] = 12∑n∈ℤ∑i Tj,ui(-n) ui(n+k)+ui (-n) [Tj,ui(n+k)] ⏟We can again ignore the normalordering from Equation 3.5 = 12∑n∈ℤ∑i nui(-n+j) ui(n+k) (c+g)-(n+k) ui(-n)ui (n+j+k)(c+g) ⏟From the previous lemma = 12(c+g)∑n∈ℤ ∑in:ui(-n+j)ui(n+k): +12(c+g) ∑n $\square$ Suppose that $V$ is a $\stackrel{ˆ}{𝔤}\text{-module}$ where $c$ acts by a scalar $M\text{.}$ We will call $M$ the level of $V\text{.}$ Suppose that $V$ is a restricted $\stackrel{ˆ}{𝔤}\text{-module}$ with level $M\ne -g\text{.}$ Then $dk↦1M+gTk$ defines an action of $\text{Vir}$ on $V$ with $z↦MM+g dim 𝔤.$ Proof. This follows from the previous lemma. $\square$ Let $𝔤$ be a reductive Lie algebra. Then $𝔤=\underset{i=1}{\overset{k}{⨁}}{𝔤}_{i},$ where ${𝔤}_{i}$ is simple or abelian. For each $1\le i\le k,$ suppose ${V}_{i}$ is a restricted ${\stackrel{ˆ}{𝔤}}_{i}^{\prime }\text{-module,}$ with level ${M}_{i}\text{.}$ The above construction gives an action of Vir on ${V}_{i}\text{;}$ denote the action of ${d}_{k}$ by ${d}_{k}^{{𝔤}_{i}}\text{.}$ Note that $\stackrel{ˆ}{𝔤}\prime =\underset{i=1}{\overset{k}{⨁}}{\stackrel{ˆ}{𝔤}}_{i}^{\prime }\text{.}$ Therefore, the tensor product ${V}_{1}\otimes \cdots \otimes {V}_{k}$ is a $\stackrel{ˆ}{𝔤}\prime \text{-module}$ (where ${\stackrel{ˆ}{𝔤}}_{i}^{\prime }$ acts on ${V}_{i}\text{).}$ Define $dk𝔤=∑i=1k dk𝔤i.$ The map ${d}_{k}↦{d}_{k}^{𝔤}$ defines a representation of $\text{Vir}$ on $V$ with $z↦∑i=1k z𝔤i=∑i=1k MiMi+gi dim 𝔤i.$ Proof. Since ${\stackrel{ˆ}{𝔤}}_{i}^{\prime }$ commutes with ${\stackrel{ˆ}{𝔤}}_{j}^{\prime },$ the operators ${d}_{k}^{{𝔤}_{i}}$ and ${d}_{k}^{{𝔤}_{j}}$ commute, and the result follows. $\square$ For the proof of Theorem 3.4.3, we will use a slight modification of the above construction. Let $𝔤$ be a reductive Lie algebra and let $𝔭\subseteq 𝔤$ be a reductive subalgebra of $𝔤\text{.}$ Then, for any restricted $𝔤\text{-module,}$ we can construct representations of Vir corresponding to both $𝔤$ and $𝔭\text{.}$ Denote the Vir-operators corresponding to $𝔤$ and $𝔭$ by ${d}_{k}^{𝔤},$ ${z}^{𝔤}$ and ${d}_{k}^{𝔭},$ ${z}^{𝔭},$ respectively. Define an action of ${d}_{k}$ on restricted $𝔤\text{-modules}$ by $dk↦dk𝔤-𝔭= dk𝔤-dk𝔭.$ The operators ${d}_{k}$ form a representation of the Virasoro algebra with $z$ acting by $z↦{z}^{𝔤-𝔭}={z}^{𝔤}-{z}^{𝔭}\text{.}$ Moreover, the action of $\text{Vir}$ commutes with the action of $\stackrel{ˆ}{𝔭}\prime \text{.}$ Proof. 
Since $\left[{d}_{k}^{𝔤},x\left(n\right)\right]=nx\left(n+k\right)=\left[{d}_{k}^{𝔭},{t}^{n}x\left(n\right)\right]$ for $x\left(n\right)\in \stackrel{ˆ}{𝔭}\prime ,$ $\left[{d}_{k}^{𝔤-𝔭},\stackrel{ˆ}{𝔭}\prime \right]=0\text{.}$ This implies $\left[{d}_{k}^{𝔤-𝔭},{d}_{k}^{𝔭}\right]=0\text{.}$ Therefore $[dj𝔤-𝔭,dk𝔤-𝔭] = [dj𝔤,dk𝔤]- [dj𝔭,dk𝔭] = (j-k)dj+k𝔤-𝔭 +δj,-kj3-j12 (z𝔤-z𝔭).$ $\square$ #### The Affine Lie Algebra $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}$ The proof of the determinant formula given in the next section relies specifically on the representions of the Virasoro algebra on $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{-modules.}$ We fix the simple root $\alpha$ of ${sl}_{2}\left(ℂ\right)\text{.}$ Then, ${\stackrel{ˆ}{𝔥}}^{*}=ℂ\alpha \oplus ℂ\delta \oplus ℂ\zeta \text{.}$ ([KRa1987] 11.4 and 12.1). Let $\lambda \in {\stackrel{ˆ}{𝔥}}^{*}$ such that $\lambda =m\zeta +\frac{n}{2}\alpha ,$ $m\ge n\ge 0\text{.}$ Set $q={e}^{-\zeta }\text{.}$ Then, $ch(L(λ)) ch(L(ζ))= ∑k∈I ψm,n,kch (L(ζ+λ-kα))$ where $I = { k∈ℤ | -12 (m+1-n)≤k≤ n2 } ; ψm,n,k = (fm,n,k-fm,n,n+1-k) ∏j=1∞(1-qj) ; fm,n,k = ∑j∈ℤ q(m+2)(m+3)j2+(n+1+2k(m+2))j+k2 .$ Also, $ch(L(λ)) ch(L(ζ)) =∑k∈I ∑j∈ℤ≥0 Δm,n,kj ch(L(ζ+λ-kα-jδ)) (3.8)$ where ${\mathrm{\Delta }}_{m,n,k}^{j}\in {ℤ}_{\ge 0}$ are such that ${\psi }_{m,n,k}=\sum _{j\in {ℤ}_{\ge 0}}{\mathrm{\Delta }}_{m,n,k}^{j}{q}^{j}\text{.}$ The minimum value of $j$ for which ${\mathrm{\Delta }}_{m,n,k}^{j}$ is nonzero is ${k}^{2}\text{.}$ For dominant integral weights $\mu$ and $\gamma$ of $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)},$ the tensor product of $L\left(\mu \right)\otimes L\left(\gamma \right)$ is completely reducible ([Kac1104219], Corollary 10.7). (A weight $\mu \in {\stackrel{ˆ}{𝔥}}^{*}$ is dominant integral if $⟨\mu ,\alpha ⟩,⟨\mu ,\delta ⟩,⟨\mu ,\zeta ⟩\in {ℤ}_{\ge 0}\text{.)}$ Therefore, Equation 3.8 implies that as $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{-modules}$ $L(ζ)⊗L(λ) ≅ ⨁k∈Ij∈ℤ≥0 L(ζ+λ-kα-jδ)⊕Δm,n,kj ≅ ⨁k∈Ij≥k2 L(ζ+λ-kα-jδ)⊕Δm,n,kj.$ We use the construction from the previous section, with $𝔭={sl}_{2}\left(ℂ\right)$ and $𝔤={sl}_{2}\left(ℂ\right)\oplus {sl}_{2}\left(ℂ\right)\text{.}$ (We embed ${sl}_{2}\left(ℂ\right)$ in ${sl}_{2}\left(ℂ\right)\oplus {sl}_{2}\left(ℂ\right)$ via the diagonal map: $x↦x\oplus x\text{.)}$ Let $V$ and $W$ be restricted $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{-modules.}$ For $x\oplus y\in \stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\oplus \stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}$ and $u\otimes w\in V\otimes W,$ define $\left(x\oplus y\right)\left(v\otimes w\right)=xv\otimes w+v\otimes yw\text{.}$ Therefore, $dk𝔤=dksl2(ℂ) ⊗1+1⊗dksl2(ℂ). 
(3.10)$ Let $\lambda \in {\stackrel{ˆ}{𝔥}}^{*}$ such that $\lambda =m\zeta +\frac{n}{2}\alpha ,$ $m\ge n\ge 0\text{.}$ We consider the action of Vir on $L\left(\lambda \right)\otimes L\left(\zeta \right)\text{.}$ Proposition 3.2.1 and Equation 3.4 imply that • ${d}_{0}^{{sl}_{2}\left(ℂ\right)}$ acts on $L\left(\lambda \right)$ by $\frac{\left(\lambda +2\stackrel{ˆ}{\rho },\lambda \right)}{m+2}-2d\text{;}$ • ${d}_{0}^{{sl}_{2}\left(ℂ\right)}$ acts on $L\left(\zeta \right)$ by $\frac{\left(\zeta +2\stackrel{ˆ}{\rho },\zeta \right)}{1+2}-2d\text{;}$ • ${d}_{0}^{𝔭}$ acts on $L\left(\lambda \right)\otimes L\left(\zeta \right)$ by $\frac{1}{m+1+2}\mathrm{\Omega }-2\left(d\otimes 1+1\otimes d\right)\text{.}$ (The dual Coxeter number of ${sl}_{2}\left(ℂ\right)$ is $g=2\text{.)}$ Therefore, ${d}_{0}^{𝔤-𝔭}$ acts on $L\left(\lambda \right)\otimes L\left(\zeta \right)$ by $d0𝔤-𝔭 = d0𝔤-d0𝔭= d0sl2(ℂ) ⊗1+1⊗d0sl2(ℂ) -d0𝔭 (3.11) = 12 ( ⟨λ+2ρˆ,λ⟩m+2+ ⟨ζ+2ρˆ,ζ⟩1+2- 1m+1+2Ω ) = n(n+2)4(m+2)- 12(m+3)Ω (3.12)$ Also, $z𝔤-𝔭= mm+23+ 11+23- m+1m+1+23= 1-6(m+2)(m+3). (3.13)$ ### A Determinant Formula for $M\left(\lambda \right)$ Recall the Hermitian anti-involution $\varphi :U\left(\text{Vir}\right)\to U\left(\text{Vir}\right)$ denned by $\varphi \left({d}_{k}\right)={d}_{-k},$ $\varphi \left(z\right)=z\text{.}$ For $\left(h,v\right)\in {ℝ}^{2},$ we use this to define an Hermitian form $⟨,⟩:M\left(h,c\right)×M\left(h,c\right)\to ℂ$ by • $⟨{v}^{+},{v}^{+}⟩=1,$ where ${v}^{+}$ is a (fixed) generator of $M\left(h,c\right)\text{;}$ • $⟨xv,\stackrel{\sim }{v}⟩=⟨v,\varphi \left(x\right)\stackrel{\sim }{v}⟩$ for $x\in U\left(\text{Vir}\right),$ $v,\stackrel{\sim }{v}\in M\left(h,c\right)\text{.}$ As we saw in the previous chapter, the form $⟨,⟩$ has two important properties: • $M{\left(h,c\right)}^{\left(h+n,c\right)}\perp M{\left(h,c\right)}^{\left(h+m,c\right)}$ for $m\ne n$ (Lemma 2.4.1); • $\text{Rad}⟨,⟩=J\left(h,c\right)$ (Lemma 2.4.2). Therefore, the determinant $\text{det}\left(M{\left(h,c\right)}^{\left(h+n,c\right)}\right)=\text{det}{\left({d}_{-\lambda }{v}^{+},{d}_{-\stackrel{\sim }{\lambda }}{v}^{+}\right)}_{\mid \lambda \mid =n=\mid \stackrel{\sim }{\lambda }\mid }$ provides a tool to study $J\left(h,c\right)\text{.}$ Example. Below, $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ (for $\left(h,c\right)\in {ℝ}^{2}\text{)}$ is computed for $n=1,2\text{.}$ $det M(h,c)(h+1,c) = det(⟨d-1v+,d-1v+⟩) = ⟨v+,d1d-1v+⟩ = ⟨v+,(d-1d1+2d0)v+⟩ = ⟨v+,2hv+⟩ = 2h.$ $det M(h,c)(h+2,c) = det ( ⟨d-12v+,d-12v+⟩ ⟨d-2v+,d-12v+⟩ ⟨d-12v+,d-2v+⟩ ⟨d-2v+,d-2v+⟩ ) = det ( 8h2+4h 6h 6h 4h+c/2 ) = 2h(16h2+2(c-5)h+c)$Theorem 3.4.3 gives a general formula for $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}\text{.}$ The highest power of $h$ in $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ is $∑r,s∈ℤ>01≤rs≤n p(n-rs),$ and the coefficient of this term is $∏r,s∈ℤ>01≤r≤s≤n ((2r)ss!) p(n-rs)- p(n-r(s+1)) .$ Proof. 
Consider the entries of $A{\left(h,c\right)}^{\left(h+n,c\right)}={\left(⟨{d}_{-\lambda }{v}^{+},{d}_{-\stackrel{\sim }{\lambda }}⟩\right)}_{\mid \lambda \mid =n=\mid \stackrel{\sim }{\lambda }\mid }\text{.}$ Let $\lambda$ and $\stackrel{\sim }{\lambda }$ be partitions of $n\text{.}$ Writing ${d}_{\lambda }{d}_{-\stackrel{\sim }{\lambda }}$ in terms of the decomposition of $U\left(\text{Vir}\right)$ in Equation 3.1, we have that $dλd-λ∼= ∑ν,μ partitions d-νpν,μ (d0,z)d-μ, (3.14)$ where ${p}_{\nu ,\mu }\left({d}_{0},z\right)$ is a polynomial in ${d}_{0}$ and $z\text{.}$ Then $⟨{d}_{-\lambda }{v}^{+},{d}_{-\stackrel{\sim }{\lambda }}{v}^{+}⟩=⟨{v}^{+},{d}_{\lambda }{d}_{-\stackrel{\sim }{\lambda }}{v}^{+}⟩={p}_{0,0}\left(h,c\right)\text{.}$ Now consider ${p}_{0,0}\left(h,c\right)$ more closely. We can use the relations $djdk = dkdj+(j-k) dj+kif j≠-k dkd-k = 2kd0+ k3-k12z$ to rearrange ${d}_{\lambda }{d}_{-\stackrel{\sim }{\lambda }}$ as in (3.14). These imply that, as a polynomial in $h,$ the degree of $\left({p}_{0,0}\left(h,c\right)\right)$ is less than or equal to $l\left(\lambda \right),l\left(\stackrel{\sim }{\lambda }\right)\text{;}$ and the degree of $\left({p}_{0,0}\left(h,c\right)\right)=l\left(\lambda \right)$ if and only if $\lambda =\stackrel{\sim }{\lambda }\text{.}$ Therefore, for any given row of $A{\left(h,c\right)}^{\left(h+n,c\right)},$ the entry with the highest powers of $h$ is the diagonal entry. Thus, the highest power of $h$ in the determinant comes from the product of the diagonal entries in $A{\left(h,c\right)}^{\left(h+n,c\right)},$ The degree of this term is $∑∣λ∣=n l(λ) = ∑r,s∈ℤ>01≤rs≤n the number of partitions of n with at least s parts of size r ⏟A partition with t parts of size r will be counted for s=1,2,…,t. = ∑r,s∈ℤ>01≤rs≤n p(n-rs) ⏟By removing s parts of size rwe obtain a partition of n-rs.$ We now compute the coefficient of this term. A partition may be written as $\lambda =\left({r}_{1}^{{s}_{1}},\dots ,{r}_{j}^{{s}_{j}}\right)\text{.}$ Note that $drd-rs = ( d-rdr+2rd0 +r3-r12z ) d-rs-1 = d-rdrd-rs-1 +d-rs-1 ( 2rd0+2r2(s-1) +r3-r12z ) = d-rsdr+ d-rs-1 ( 2rsd0+r2s(s-1) +(r3-r)s12z ) ,$ and so ${d}_{r}^{s}{d}_{-r}^{s}={d}_{-r}^{s}{d}_{r}^{s}+{\left(2r\right)}^{s}s!{d}_{0}^{s}+$ terms of lower degree in ${d}_{0}\text{.}$ Therefore, the coefficient of the highest power of $h$ in a diagonal entry $⟨{d}_{-{r}_{1}}^{{s}_{1}}\cdots {d}_{-{r}_{j}}^{{s}_{j}}{v}^{+},{d}_{-{r}_{1}}^{{s}_{1}}\cdots {d}_{-{r}_{j}}^{{s}_{j}}{v}^{+}⟩$ is ${\left(2{r}_{j}\right)}^{{s}_{j}}\left({s}_{j}\right)!,$ and the coefficient of the highest power of $h$ in $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ is $∏∣λ∣=nλ=(r1s1,…,rjsj) (2rj)sj(sj)!= ∏r,s∈ℤ>0,1≤r≤s≤n ((2r)ss!) p(n-rs)- p(n-r(s+1)) .$ $\square$ Since the highest power of $h$ in $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ does not involve $c,$ we fix $c$ and think of $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ as a polynomial in $h\text{.}$ Fix $c\in ℝ\text{.}$ Let ${h}_{0}\in ℝ$ and suppose $\left(h-{h}_{0}\right)$ divides $\text{det} M{\left(h,c\right)}^{\left(h+k,c\right)}\text{.}$ Then ${\left(h-{h}_{0}\right)}^{p\left(n-k\right)}$ divides $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}\text{.}$ Proof. Suppose $\left(h-{h}_{0}\right)$ divides $\text{det} M{\left(h,c\right)}^{\left(h+k,c\right)}\text{.}$ This implies ${A}_{k}\left({h}_{0},c\right)$ is degenerate. 
In other words, there is a vector ${\left({a}_{1},\dots ,{a}_{p\left(k\right)}\right)}^{T},$ ${a}_{i}\in ℂ$ and ${a}_{i}\ne 0$ for at least one $i,$ such that ${A}_{k}\left({h}_{0},c\right){\left({a}_{1},\dots ,{a}_{p\left(k\right)}\right)}^{T}=0\text{.}$ Then. $Ak(h,c) (a1⋮ap(k))= (P1⋮Pp(k)),$ where the ${P}_{i}$ are polynomials in $h$ which are divisible by $\left(h-{h}_{0}\right)\text{.}$ Define $\stackrel{\sim }{v}=\sum _{i=1}^{p\left(k\right)}{a}_{i}{d}_{-{\lambda }^{\left(i\right)}}{v}^{+}\text{.}$ We then have $\left(h-{h}_{0}\right)$ divides ${P}_{i}=⟨{d}_{-{\lambda }^{\left(i\right)}}{v}^{+},\stackrel{\sim }{v}⟩\text{.}$ Consider $\stackrel{\sim }{B}=\left\{{d}_{-\lambda }\left(\sum _{i=1}^{p\left(k\right)}{a}_{i}{d}_{-{\lambda }^{\left(i\right)}}\right) | \mid \lambda \mid =n-k\right\}\subseteq U{\left({\text{Vir}}_{-}\right)}^{\left(n,0\right)}\text{.}$ (Here we view $U\left({\text{Vir}}_{-}\right)$ as a Vir-module under the adjoint action.) This set is linearly independent in $U\left({\text{Vir}}_{-}\right)$ and can be extended to a set $B$ of basis vectors for $U{\left({\text{Vir}}_{-}\right)}^{\left(n,0\right)}\text{.}$ Let $P$ be the matrix taking $B$ to $\left\{{d}_{-\lambda } | \mid \lambda \mid =n\right\}\text{.}$ Then the entries of $P$ are in $ℂ$ and $\text{det}\left(P\right)\ne 0\text{.}$ Now, for ${d}_{-\lambda }\left(\sum _{i=1}^{p\left(k\right)}{a}_{i}{d}_{-{\lambda }^{\left(i\right)}}\right)\in \stackrel{\sim }{B},$ $d-λ∑i=1p(k) aid-λ(i)v+ =d-λv∼.$ Also, $\left(h-{h}_{0}\right)$ divides $⟨{d}_{-\lambda }\stackrel{\sim }{v},w⟩$ for all $w\in M\left(h,c\right)\text{.}$ Then $(h-h0)p(n-k) |det (⟨Xiv+,Xjv+⟩) Xi,Xj∈B‾ .$ Finally, $det M(h,c)(h+n,c) = det ( Pt (⟨Xiv+,Xjv+⟩) Xi,Xj∈B‾ P ) = det(Pt) det(P)det (⟨Xiv+,Xjv+⟩) Xi,Xj∈B‾ = det(P)2det (⟨Xiv+,Xjv+⟩) Xi,Xj∈B‾ .$ Since $\text{det}\left(P\right)\ne 0,$ this implies ${\left(h-{h}_{0}\right)}^{p\left(n-k\right)}$ divides $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}\text{.}$ $\square$ ([KRa1987], [FFu1990]). For $\left(h,c\right)\in {ℝ}^{2}$ and $n\in {ℤ}_{\ge 0},$ $det M(h,c)(h+n,c)= ∏r,s∈ℤ>0,1≤r≤s≤n ((2r)ss!) p(n-rs)-p(n-r(s+1)) ∏r,s∈ℤ>0 (h-hr,s) p(n-rs) ,$ where $hr,s(c)=148 ( (13-c)(r2+s2) +(c-1)(c-25) (r2-s2)-24rs-2 +2c ) .$ Proof. Given lemmas 3.4.1 and 3.4.2, we only need to show that $\left(h-{h}_{r,s}\left(c\right)\right)$ divides $\text{det} M{\left(h,c\right)}^{\left(h+rs,c\right)}\text{.}$ We will use the representation of Vir on restricted $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{-modules}$ to prove this. Recall (from Equation 3.9) that, for $\lambda =m\zeta +\frac{n}{2}\alpha$ $\text{(}m\ge n>0\text{),}$ we can write the tensor product $L\left(\zeta \right)\otimes L\left(\lambda \right)$ of $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{-modules}$ as $L(ζ)⊗L(λ) = ⨁k∈Ij∈ℤ≥0 L(ζ+λ-kα-jδ)⊕Δm,n,kj, = ⨁k∈Ij≥k2 L(ζ+λ-kα-jδ)⊕Δm,n,kj.$ Let ${U}_{m,n,k}^{j}$ be the space of highest weight vectors of weight $\zeta +\lambda -k\alpha -j\delta$ in $L\left(\zeta \right)\otimes L\left(\lambda \right)\text{.}$ Then, $\text{dim} {U}_{m,n,k}^{j}={\mathrm{\Delta }}_{m,n,k}^{j}$ and ${U}_{m,n,k}=\underset{j\in {ℤ}_{\ge 0}}{⨁}{U}_{m,n,k}^{j}$ is the space of highest weight vectors of weight $\zeta +\lambda -k\alpha$ for $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\prime \text{.}$ From Section 3.3, we know that $L\left(\lambda \right)\otimes L\left(\zeta \right)$ is a Vir-module. Since the action of $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\prime$ and Vir commute (Proposition 3.3.5), we also have that ${U}_{m,n,k}$ is a Vir-module. 
Moreover, given Equations 3.11 and 3.13 and Proposition 3.2.1, it is clear that ${U}_{m,n,k}^{j}$ is a weight space for the action of Vir such that • for all $v\in {U}_{m,n,k},$ $zv=\left(1-\frac{6}{\left(m+2\right)\left(m+3\right)}\right)v\text{;}$ • for $v\in {U}_{m,n,k}^{j},$ $d0v = ( n(n+2)4(m+2)- 12(m+3)Ω ) v = ( n(n+2)4(m+2)+ j-(n-2k)(n-2k+2)4(m+3) ) v.$ Define $hr,sm = ((m+3)r-(m+2)s)2-1 4(m+2)(m+3) cm = 1-6(m+2)(m+3),$ where $r = n+1, r = m-n+1, s = n+1-2k if k≥0 s = m-n+2+2k, if k<0.$ Note that $\left(r,s\right)↦\left(m+2-r,m+3-s\right)$ switches these definitions. $\square$ According to Proposition 3.3.6, the minimum value of $j$ for which ${U}_{m,n,k}^{j}\ne 0$ is $j={k}^{2}\text{.}$ Therefore, as a Vir-module, ${U}_{m,n,k}$ has highest weight $\left({h}_{r,s}^{m},{c}_{m}\right)\text{.}$ Since ${\mathrm{\Delta }}_{m,n,k}^{j}<\infty ,$ this shows that ${U}_{m,n,k}\in {𝒪}_{\text{Vir}}\text{.}$ The character of ${U}_{m,n,k}$ is $ch Um,n,k = e(hr,sm,cm) ∑j≥k2dim Um,n,kjqj-k2 = e(hr,sm,cm) q-k2ψm,n,k = e(hr,sm,cm) q-k2 ∏i=1∞(1-qi) (fm,n,k-fm,n,n+1-k) = e(hr,sm,cm) 1 ∏i=1∞(1-qi) × ( ∑j∈ℤ q(m+2)(m+3)j2+((m+3)r-(m+2)s)- ∑j∈ℤ q(m+2)(m+3)j2+((m+3)r+(m+2)s)+rs ) = e(hr,sm,cm) 1∏i=1∞(1-qi) ⏟=ch M(hr,sm,cm) ( 1-qrs- q(m+2-r)(m+3-s)+ terms of degree>r∼s∼ ) .$ (Here we are intentionally confusing $q={e}^{\left(1,0\right)}$ (for the Virasoro algebra) and $q={e}^{-\zeta }$ (for $\stackrel{ˆ}{{sl}_{2}\left(ℂ\right)}\text{).)}$ Since the maximum weight for ${U}_{m,n,k}$ is $\left({h}_{r,s}^{m},{c}_{m}\right),$ $L\left({h}_{r,s}^{m},{c}_{m}\right)\subseteq {U}_{m,n,k}\text{.}$ Therefore, $\text{ch} L\left({c}_{m},{h}_{r,s}^{m}\right)\le \text{ch} {U}_{m,n,k}^{m}\text{.}$ Let $\left(\stackrel{\sim }{r},\stackrel{\sim }{s}\right)$ be whichever of the pairs $\left(r,s\right),$ $\left(m+2-r,m+3-s\right)$ has minimum product. 
Note that ${h}_{\stackrel{\sim }{r},\stackrel{\sim }{s}}^{m}={h}_{r,s}^{m}\text{.}$ The coefficient of ${q}^{\stackrel{\sim }{r}\stackrel{\sim }{s}}$ in $\text{ch} L\left({h}_{r,s}^{m},{c}_{m}\right)$ is less than the coefficient of ${q}^{\stackrel{\sim }{r}\stackrel{\sim }{s}}$ in $\text{ch} M\left({h}_{r,s}^{m},{c}_{m}\right),$ implying $\text{dim} L{\left({h}_{r,s}^{m},{c}_{m}\right)}^{\left({h}_{r,s}^{m}+\stackrel{\sim }{r}\stackrel{\sim }{s},{c}_{m}\right)}<\text{dim} M{\left({h}_{r,s}^{m},{c}_{m}\right)}^{\left({h}_{r,s}^{m}+\stackrel{\sim }{r}\stackrel{\sim }{s},{c}_{m}\right)}\text{.}$ Then, $J{\left({h}_{r,s}^{m},{c}_{m}\right)}^{\left({h}_{r,s}^{m}+\stackrel{\sim }{r}\stackrel{\sim }{s},{c}_{m}\right)}=\text{Rad} {⟨,⟩}^{\left({h}_{r,s}^{m}+\stackrel{\sim }{r}\stackrel{\sim }{s},{c}_{m}\right)}\ne 0\text{.}$ Since $\stackrel{\sim }{r}\stackrel{\sim }{s}\le rs,$ $J{\left({h}_{r,s}^{m},{c}_{m}\right)}^{\left({h}_{r,s}^{m}+rs,{c}_{m}\right)}\ne 0\text{.}$ We then have $\text{det}\left(M{\left({h}_{r,s}^{m},{c}_{m}\right)}^{\left({h}_{r,s}^{m}+rs,{c}_{m}\right)}\right)=0\text{.}$ Since ${h}_{r,s}\left({c}_{m}\right)={h}_{r,s}^{m},$ $\text{det} M{\left(h,c\right)}^{\left(h+rs,c\right)}$ vanishes at infinitely many points along the curve $h={h}_{r,s}\left(c\right)\text{.}$ Therefore, $\left(h-{h}_{r,s}\left(c\right)\right)$ divides $\text{det} M{\left(h,c\right)}^{\left(h+rs,c\right)}\text{.}$ ### Blocks Recall that we define an equivalence relation $\sim$ on the weights ${𝔥}^{*}$ of Vir generated by the relation $\lambda \sim \mu$ if $\left[M\left(\lambda \right):L\left(\mu \right)\right]>0\text{.}$ The blocks of the Virasoro algebra are the equivalence classes of $\sim \text{.}$ Prom Theorem 2.3.6, we know that $\left[M\left(\lambda \right):L\left(\mu \right)\right]>0$ if and only if $M\left(\mu \right)\subseteq M\left(\lambda \right)\text{.}$ We will use this alternative formulation of $\sim$ in order to describe the blocks of Vir. #### Blocks and the Determinant Formula For $r,s\in ℤ,$ define $𝒞r,s(h,c)= { (h-148((13-c)(r2+s2)-24rs-2+2c))2 r≠s -1482(c-1) (c-25)(r2-s2)2 h-(r2-1)(1-c)24 r=s$ Viewing ${𝒞}_{r,s}\left(h,c\right)$ as a polynomial in $h,$ we have $𝒞r,s(h,c)= { (h-hr,s(c)) (h-hs,r(c)) r≠s, h-hr,r(c) r=s.$ Therefore, the determinant formula for $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}$ can be rewritten as $det M(h,c)(h+n,c)= ∏r,s∈ℤ>0,1≤r≤s≤n ((2r)ss!) p(n-rs)-p(n-r(s+1)) (𝒞r,s(h,c))p(n-rs). (3.15)$ For $\left(h,c\right)\in {ℝ}^{2},$ we know that $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}=0$ if and only if $J{\left(h,c\right)}^{\left(h+n,c\right)}\ne 0\text{.}$ Equation 3.15 implies that if $r,s\in {ℤ}_{>0}$ are such that the product $rs$ is minimal with ${𝒞}_{r,s}\left(h,c\right)=0,$ then $J{\left(h,c\right)}^{\left(h+rs,c\right)}\ne 0$ and $J{\left(h,c\right)}^{\left(h+n,c\right)}=0$ for all $n Therefore, any vector $0\ne v\in J{\left(h,c\right)}^{\left(h+rs,c\right)}$ is a highest weight vector and so $M\left(h+rs,c\right)\subseteq M\left(h,c\right)\text{.}$ Theorem 3.5.1 shows that for any $r,s\in {ℤ}_{>0}$ such that ${𝒞}_{r,s}\left(h,c\right)=0,$ $M\left(h+rs,c\right)\subseteq M\left(h,c\right)$ and that these embeddings produce a complete description of the submodule structure of $M\left(h,c\right)\text{.}$ For fixed $r,s,$ $r\ne s,$ the curves ${𝒞}_{r,s}\left(h,c\right)=0$ are hyperbolas. 
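For example, taking $(r,s)=(1,2)$ and computing directly from the definition of ${𝒞}_{r,s}(h,c)$ above (a worked illustration; the arithmetic is elementary), one finds
$${𝒞}_{1,2}(h,c)=\left(h-h_{1,2}(c)\right)\left(h-h_{2,1}(c)\right)=\left(h-\frac{5-c}{16}\right)^{2}-\frac{(c-1)(c-25)}{256}.$$
At $c=0$ this vanishes exactly at $h=0$ and $h=\frac{5}{8}$; in particular ${𝒞}_{1,2}(0,0)=0$, which gives the embedding $M(2,0)\subseteq M(0,0)$.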
Below is the curve ${𝒞}_{1,2}\left(h,c\right)=0\text{.}$ $-20 c 20 40 -4 -2 h 4 2$ Also note that • ${𝒞}_{r,s}\left(h,c\right)={𝒞}_{-r,-s}\left(h,c\right),$ • ${𝒞}_{-r,s}\left(h,c\right)=0$ if and only if ${𝒞}_{r,s}\left(h-rs,c\right)=0,$ • ${𝒞}_{r,s}\left(h,c\right)={𝒞}_{-r,s}\left(1-h,26-c\right)\text{.}$ For fixed $\left(h,c\right)\in {ℝ}^{2},$ ${𝒞}_{r,s}\left(h,c\right)$ can be factored into terms linear in $r$ and $s\text{:}$ $𝒞r,s(h,c)= K(pr+qs+m) (pr+qs-m) (qr+ps+m) (qr+ps-m),$ where $K,p,q,m\in ℂ$ such that $pq+qp= c-136; 4pqh+(p+q)2 =m2;K= 16p2q2.$ Thus, for fixed $\left(h,c\right),$ the solutions to the equation ${𝒞}_{r,s}\left(h,c\right)=0$ form two sets of parallel lines. The figure below illustrates the example ${𝒞}_{r,s}\left(0,0\right)=0\text{.}$ $-1.0 -0.5 0.5 1.0 -2 -1 1 2$ To find all integer solutions to ${𝒞}_{r,s}\left(h,c\right)=0,$ we only need to consider one line, say $pr+qs+m=0\text{.}$ (If $\left(r,s\right)$ is a point on any of the other lines, $\left(-r,-s\right),$ $\left(s,r\right)$ or $\left(-s,-r\right)$ will lie on the line $pr+qs+m=0\text{.)}$ We fix one of the lines and call it ${ℒ}_{\left(h,c\right)}\text{.}$ Theorem 3.5.1 will show that the integer points $\left(r,s\right)$ on this line encode the embeddings $M\left(h\prime ,c\prime \right)\subseteq M\left(h,c\right)\subseteq M\left(h″,c″\right)\text{.}$ Note that a line passes through 0, 1, or infinitely many integer points. (If the line passes through two integer points, it has rational slope and therefore passes through infinitely many integer points.) In other words, there are 0, 1, or infinitely many curves ${𝒞}_{r,s}\left(h,c\right)=0,$ $r,s\in ℤ,$ passing through a fixed point $\left(h,c\right)\text{.}$ Below we include a partial picture of curves ${𝒞}_{r,s}\left(h,c\right)=0$ for values of $c$ near $c=1\text{.}$ $0.2 0.4 0.6 0.8 1.0 c 1 2 3 4 5 h$ There are three points in the picture where multiple curves intersect: $\left(h,c\right)=\left(0,1\right),\left(1,1\right),\left(4,1\right)\text{.}$ As Theorem 3.5.1 shows, these weights belong to the block $\left[\left(0,1\right)\right]=\left\{\left({m}^{2},1\right) | m\in ℤ\right\}\text{.}$ The line $ℒ\left(h,c\right)$ has nonzero slope. Thus, if it passes through infinitely many integer points $\left(r,s\right)$ with $rs>0$ it must pass through finitely many points $\left(r,s\right)$ with $rs<0,$ and vice versa. ([FFu1990]). Suppose $r,s\in {ℤ}_{>0}$ such that ${𝒞}_{r,s}\left(h,c\right)=0\text{.}$ Then, $M\left(h+rs,c\right)\subseteq M\left(h,c\right)\text{.}$ All embeddings of Verma modules arise in this way. Therefore, we have the following description of Verma module embeddings. $\phantom{\rule{2em}{0ex}}$Fix a pair $\left(h,c\right)\in {ℝ}^{2},$ and let ${ℒ}_{\left(h,c\right)}$ be one of the lines defined by this pair. Then the Verma module embeddings involving $M\left(h,c\right)$ are described by one of the following four cases. (i) Suppose ${ℒ}_{\left(h,c\right)}$ passes through no integer points. The Verma module $M\left(h,c\right)$ is irreducible and does not embed in any other Verma modules. The block $\left[\left(h,c\right)\right]$ is given by $\left[\left(h,c\right)\right]=\left\{\left(h,c\right)\right\}\text{.}$ (ii) Suppose ${ℒ}_{\left(h,c\right)}$ passes through exactly one integer point $\left(r,s\right)\text{.}$ (a) If $rs>0,$ the embeddings for $M\left(h,c\right)$ look like $• M(h,c) ↑ • M(h+rs,c)$ where the arrow indicates inclusion. 
(b) If $rs<0,$ the embeddings for $M\left(h,c\right)$ look like $• M(h+rs,c) ↑ • M(h,c)$ The block $\left[\left(h,c\right)\right]$ is given by $\left[\left(h,c\right)\right]=\left\{\left(h,c\right),\left(h+rs,c\right)\right\}\text{.}$ (iii) Suppose ${ℒ}_{\left(h,c\right)}$ passes through infinitely many integer points and crosses an axis at an integer point. Label these points $\left({r}_{i},{s}_{i}\right)$ so that $\dots <{r}_{-2}{s}_{-2}<{r}_{-1}{s}_{-1}<0<{r}_{1}{s}_{1}<{r}_{2}{s}_{2}\dots \text{.}$ (We exclude points $\left(r,s\right)$ where $r=0$ or $s=0\text{;}$ these correspond to the embedding $M\left(h,c\right)=M\left(h+0,c\right)\subseteq M\left(h,c\right)\text{.}\text{)}$ $\left({r}_{-1},{s}_{-1}\right) \left({r}_{1},{s}_{1}\right) \left({r}_{2},{s}_{2}\right) \left({r}_{3},{s}_{3}\right) \left({r}_{4},{s}_{4}\right) \left({r}_{5},{s}_{5}\right) \left({r}_{6},{s}_{6}\right) \left({r}_{7},{s}_{7}\right) \left({r}_{8},{s}_{8}\right) \left({r}_{9},{s}_{9}\right)$ $\phantom{\rule{2em}{0ex}}$The embeddings between the corresponding Verma modules take one of the following forms: $• ↑ • ⋮ •M(h+r-1s-1,c) ↑ •M(h,c) ↑ •M(h+r1s1,c) ⋮ slope (ℒ(h,c))>0 ⋮ •M(h+r-1s-1,c) ↑ •M(h,c) ↑ •M(h+r1s1,c) ⋮ • ↑ • slope (ℒ(h,c))<0$ The block $\left[\left(h,c\right)\right]$ is given by $\left[\left(h,c\right)\right]=\left\{\left(h,c\right),\left(h+{r}_{i}{s}_{i},c\right)\right\}\text{.}$ (iv) Suppose ${ℒ}_{\left(h,c\right)}$ passes through infinitely many integer points and does not cross either axis at an integer point. Again label the integer points $\left({r}_{i},{s}_{i}\right)$ on ${ℒ}_{\left(h,c\right)}$ so that $\dots <{r}_{-2}{s}_{-2}<{r}_{-1}{s}_{-1}<0<{r}_{1}{s}_{1}<{r}_{2}{s}_{2}\dots \text{.}$ Also consider the auxiliary line ${\stackrel{\sim }{ℒ}}_{\left(h,c\right)}$ with the same slope as ${ℒ}_{\left(h,c\right)}$ passing through the point $\left(-{r}_{1},{s}_{1}\right)\text{.}$ Label the integer points on this line $\left({\stackrel{\sim }{r}}_{j},{\stackrel{\sim }{s}}_{j}\right)$ as above. 
The embeddings between the corresponding Verma modules take one of the forms $\text{slope}\left({ℒ}_{\left(h,c\right)}\right)>0 \text{slope}\left({ℒ}_{\left(h,c\right)}\right)<0 M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-3}{\stackrel{\sim }{s}}_{-3},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-4}{\stackrel{\sim }{s}}_{-4},c\right) M\left(h+{r}_{-1}{s}_{-1},c\right) M\left(h+{r}_{-2}{s}_{-2},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-1}{\stackrel{\sim }{s}}_{-1},c\right) M\left(h,c\right)=M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-2}{\stackrel{\sim }{s}}_{-2},c\right) M\left(h+{r}_{1}{s}_{1},c\right) M\left(h+{r}_{2}{s}_{2},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{1}{\stackrel{\sim }{s}}_{1},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{2}{\stackrel{\sim }{s}}_{2},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-3}{\stackrel{\sim }{s}}_{-3},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-4}{\stackrel{\sim }{s}}_{-4},c\right) M\left(h+{r}_{-1}{s}_{-1},c\right) M\left(h+{r}_{-2}{s}_{-2},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-1}{\stackrel{\sim }{s}}_{-1},c\right) M\left(h,c\right)=M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{-2}{\stackrel{\sim }{s}}_{-2},c\right) M\left(h+{r}_{1}{s}_{1},c\right) M\left(h+{r}_{2}{s}_{2},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{1}{\stackrel{\sim }{s}}_{1},c\right) M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{2}{\stackrel{\sim }{s}}_{2},c\right)$ The block $\left[\left(h,c\right)\right]$ is given by $\left[\left(h,c\right)\right]=\left\{\left(h+{r}_{i}{s}_{i},c\right),\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{j}{\stackrel{\sim }{s}}_{j},c\right)\right\}\text{.}$ See Section 2.5 for more on Jantzen filtrations. ([FFu1990]). Let $\left(h,c\right)\in {ℝ}^{2}$ and classify $\left(h,c\right)$ according to the cases given above. Then the Jantzen filtration of $M\left(h,c\right)$ is given as follows: (i), (iia) $M{\left(h,c\right)}_{j}=0$ for all $j>0\text{.}$ (iib) $M{\left(h,c\right)}_{1}=M\left(h+rs,c\right)$ and $M{\left(h,c\right)}_{j}=0$ for all $j>1\text{.}$ (iii) $M{\left(h,c\right)}_{j}=M\left(h+{r}_{j}{s}_{j},c\right)$ and $M{\left(h,c\right)}_{j}=0$ if there is no point $\left({r}_{j},{s}_{j}\right)$ on the line ${ℒ}_{\left(h,c\right)}\text{.}$ We have the following picture of the Jantzen filtration of $M\left(h,c\right)\text{:}$ $M\left(h,c\right) M{\left(h,c\right)}_{1} M{\left(h,c\right)}_{2}$ (iv) Write $n1,1 = r1s1 n1,2 = r2s2 n2,1 = r1s1+ r∼1s∼1 n2,2 = r1s1+ r∼2s∼2 n3,1 = r3s3 n3,2 = r4s4 ⋮ ⋮$ Then $M{\left(h,c\right)}_{j}=M\left(h+{n}_{j,1},c\right)+M\left(h+{n}_{j,2},c\right),$ and $M\left(h+{n}_{j,1},c\right)\cap M\left(h+{n}_{j,2},c\right)=M{\left(h,c\right)}_{j+1}\text{.}$ We have the following picture of the Jantzen filtration of $M\left(h,c\right)\text{:}$ $M\left(h,c\right) M{\left(h,c\right)}_{1} M{\left(h,c\right)}_{2} M{\left(h,c\right)}_{3}$ Partial Proof of Theorems 3.5.1 and 3.5.2. We give a proof of cases (i) and (ii) for both theorems simultaneously. We note that if we set $\gamma =\left(1,0\right),$ then for any $\lambda \in {𝔥}^{*},$ ${⟨,⟩}_{\lambda +t\gamma }$ will be nondegenerate. Therefore, Theorem 2.5.1 holds. Case (i): Suppose ${ℒ}_{\left(h,c\right)}$ passes through no integer points. Then, $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}\ne 0$ for all $n\in {ℤ}_{\ge 0}$ and so $M\left(h,c\right)$ is irreducible. 
Case (ii)a: Suppose ${ℒ}_{\left(h,c\right)}$ passes through one integer point $\left(r,s\right)$ with $rs>0\text{.}$ Since ${𝒞}_{r,s}\left(h,c\right)={𝒞}_{-r,-s}\left(h,c\right),$ we can assume that $r,s\in {ℤ}_{>0}$ and that $r$ and $s$ are the only positive integers such ${𝒞}_{r,s}\left(h,c\right)=0\text{.}$ Therefore, $M\left(h+rs,c\right)\subseteq M\left(h,c\right)\text{.}$ This means $M\left(h+rs,c\right)\subseteq J\left(h,c\right),$ and so $dim J(h,c)(h+n,c)≥ dim M(h+rs,c)(h+n,c)= p(n-rs)$ for all $x\in {ℤ}_{\ge 0}\text{.}$ However, from Theorem 2.5.1, $dim J(h,c)(h+n,c)≤ ∑j∈ℤ>0dim M(h,c)j(h+n,c)= ord(det M(h+t,c)(h+t+n,c)) =p(n-rs).$ Therefore $M\left(h+rs,c\right)=J\left(h,c\right)=M{\left(h,c\right)}_{1}$ and $M{\left(h,c\right)}_{j}=0$ for $j>1\text{.}$ Since ${𝒞}_{r,s}\left(h+rs,c\right)={𝒞}_{-r,s}\left(h,c\right),$ there are no integers $r,s\in {ℤ}_{>0}$ such that ${𝒞}_{r,s}\left(h+rs,c\right)=0\text{.}$ This implies $M\left(h+rs,c\right)$ is irreducible. Case (ii)b: Suppose ${ℒ}_{\left(h,c\right)}$ passes through one integer point $\left(r,s\right)$ with $rs<0\text{.}$ Since ${𝒞}_{r,s}\left(h,c\right)={𝒞}_{-r,s}\left(h+rs,c\right),$ the point $\left(-r,s\right)$ is on the line ${ℒ}_{\left(h,c\right)}$ (if we choose the line ${ℒ}_{\left(h,c\right)}$ carefully out of the four possible lines.) Also, $\left(-r,s\right)$ is the only integer point on ${ℒ}_{\left(h-rs,c\right)}\text{.}$ Then $\left(h-rs,c\right)$ falls into case (ii)a. We do not provide a proof of cases (hi) and (iv). The proof of these cases can be found in [FFu1990], Part II, Section 1. However, we do make a few comments to show that these results are reasonable. Case (iii): Suppose ${ℒ}_{\left(h,c\right)}$ passes through infinitely many integer points $\left({r}_{i},{s}_{i}\right)$ and crosses an axis at an integer point. We will assume the slope $\mu$ is positive and ${ℒ}_{\left(h,c\right)}$ passes through a the point $\left(0,{s}_{0}\right)$ for some ${s}_{0}\in {ℤ}_{>0}\text{.}$ (If $\mu <0$ or ${ℒ}_{\left(h,c\right)}$ crosses the axis at a different point, we can still make arguments similar to those below.) Write ${s}_{0}=kp+\stackrel{‾}{{s}_{0}}$ where $k\in {ℤ}_{\ge 0}$ and $0\ge \stackrel{‾}{{s}_{0}} We observe that $\left({r}_{1},{s}_{1}\right)=\left(-\left(k+1\right)q,\stackrel{‾}{{s}_{0}}-p\right)\text{.}$ (If $\stackrel{‾}{{s}_{0}}=0,$ then there are two points on the line ${ℒ}_{\left(h,c\right)},$ $\left(-\left(k+1\right)q,-p\right)$ and $\left(q,\left(k+1\right)p\right),$ with the same product. In this case, there is not a unique choice for $\left({r}_{1},{s}_{1}\right)\text{.}$ We choose either point.) We have $M\left(h+{r}_{1}{s}_{1},c\right)\subseteq M\left(h,c\right),$ and $\left({r}_{1},-{s}_{1}\right)$ is on the line ${ℒ}_{\left(h+{r}_{1}{s}_{1},c\right)}$ (for a careful choice of this line). The line ${\stackrel{\sim }{ℒ}}_{\left(h,c\right)}={ℒ}_{\left(h+{r}_{1}{s}_{1},c\right)}$ so passes through infinitely many integer points and crosses an axis at an integer point, so that we can use the same arguments as above. 
We see that $\left({\stackrel{\sim }{r}}_{1},{\stackrel{\sim }{s}}_{1}\right)=\left(-\left(k+2\right)q,-\stackrel{‾}{{s}_{0}}\right),$ which implies $M\left(h+{r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{1}{\stackrel{\sim }{s}}_{1},c\right)\subseteq M\left(h+{r}_{1}{s}_{1},c\right)\subseteq M\left(h,c\right)\text{.}$ Since ${r}_{1}{s}_{1}+{\stackrel{\sim }{r}}_{1}{\stackrel{\sim }{s}}_{1}={r}_{2}{s}_{2},$ we have $M\left(h+{r}_{2}{s}_{2},c\right)\subseteq M\left(h+{r}_{1}{s}_{1},c\right)\subseteq M\left(h,c\right)\text{.}$ We can continue this argument to get $M\left(h,c\right)\supseteq M\left(h+{r}_{1}{s}_{1},c\right)\supseteq M\left(h+{r}_{2}{s}_{2},c\right)\supseteq M\left(h+{r}_{3}{s}_{3}\right)\subseteq \cdots ,$ To show that $M\left(h,c\right)\subseteq M\left(h+{r}_{-1}{s}_{-1},c\right)\subseteq M\left(h+{r}_{-2}{s}_{-2},c\right)\subseteq \cdots ,$ we use the fact that ${𝒞}_{r,s}\left(h,c\right)={𝒞}_{-r,s}\left(h+rs,c\right)$ and apply the above argument to the Verma modules $M\left(h+{r}_{-i}{s}_{-i},c\right)\text{.}$ Therefore, the Verma module embeddings for $M\left(h,c\right)$ are at least those indicated in Theorem 3.5.1. Case (iv): We again have $M\left(h+{r}_{1}{s}_{1},c\right)\subseteq M\left(h,c\right)\text{.}$ Since ${ℒ}_{\left(h,c\right)}$ does not cross at axis at an integer point, $\text{det} M{\left(h+{r}_{1}{s}_{1},c\right)}^{\left(h+{r}_{2}{s}_{2},c\right)}\ne 0\text{.}$ Therefore, $M{\left(h+{r}_{1}{s}_{1},c\right)}^{\left(h+{r}_{2}{s}_{2},c\right)}\cap M{\left(h,c\right)}_{j}=0$ for $j>0\text{.}$ However, $\text{ord} M{\left(h+t,c\right)}^{\left(h+t+{r}_{2}{s}_{2},c\right)}=p\left({r}_{2}{s}_{2}-{r}_{1}{s}_{1}\right)+1=\text{dim} M{\left(h+{r}_{1}{s}_{1},c\right)}^{\left(h+{r}_{2}{s}_{2},c\right)}+1\text{.}$ Using Theorem 2.5.1, we see that there must be some vector $0\ne v\in M{\left(h,c\right)}_{1}^{\left(h+{r}_{2}{s}_{2},c\right)}$ so that $v\ne M\left(h+{r}_{1}{s}_{1},c\right)\text{.}$ It remains to show that $v$ is a highest weight vector. $\square$ #### Another Description of Blocks We can use the line ${ℒ}_{\left(h,c\right)}$ to generate lines corresponding to the entire block $\left[\left(h,c\right)\right]$ in the following way. If $\left(r,s\right)$ is an integer point on the line ${ℒ}_{\left(h,c\right)}$ let ${\stackrel{\sim }{ℒ}}_{\left(h,c\right)}$ be the line with the same slope as ${ℒ}_{\left(h,c\right)}$ and passing through the point $\left(-r,s\right)\text{.}$ Then ${\stackrel{\sim }{ℒ}}_{\left(h,c\right)}={\stackrel{\sim }{ℒ}}_{\left(h+rs,c\right)}$ corresponds to the weight $\left(h+rs,c\right)\in \left[\left(h,c\right)\right]\text{.}$ Using this approach we can construct a set of lines corresponding to the weights in a given block. In this section, we begin with a line and generate the weights in a given block. Let ${ℒ}^{\left(\mu ,a,b\right)}$ be a line where $\mu$ is the slope of the line and $\left(a,b\right)$ is a point on the line. 
Then ${ℒ}^{\left(\mu ,a,b\right)}$ determines a weight $\left(h,c\right)$ by $h=(aμ-b)2-(μ-1)24μ, c=13-6(μ+1μ).$ We will write $\left[\left(\mu ,a,b\right)\right]$ for $\left[\left(h,c\right)\right]$ if ${ℒ}^{\left(\mu ,a,b\right)}$ determines $\left(h,c\right)\text{.}$ • Blocks of size two are indexed by triples ${ (μ,a,b) | μ∈ ℝ-ℚ with ∣μ∣ <1 and a,b∈ℤ>0 } ∪ { (μ,a,a) | μ∈ℂ-ℚ,∣μ∣ =1,a∈ℤ>0 } .$ The weights in a block of size two $\left[\left(\mu ,a,b\right)\right]$ are indexed by triples $\left\{\left(\mu ,a,±b\right)\right\}\text{.}$ • Infinite blocks with a maximal element are indexed by triples ${ (pq,a,b) | p,q∈ℤ>0, with gcd(p,q)=1,p Infinite blocks with a minimal element are indexed by triples ${ (-pq,-a,b) | p,q∈ℤ>0, with gcd(p,q)=1,p For a block $\left[\left(±\frac{p}{q},±a,b\right)\right]$ with $a\ne 0,$ the weights in the block are indexed by triples ${(pq,a,±b+2kp) | k∈ℤ} or {(-pq,-a,±b+2kp) | k∈ℤ}.$ For a block $\left[\left(±\frac{p}{q},0,b\right)\right],$ the weights in the block are indexed by triples ${(pq,0,b),(pq,0,±b+2kp) | k∈ℤ>0} or {(-pq,0,b),(-pq,0,±b+2kp) | k∈ℤ>0}.$ Proof. Suppose $\left(\mu ,a,b\right)\to \left(h,c\right)$ and $\mid \left[\left(h,c\right)\right]\mid >1\text{.}$ Then, ${ℒ}^{\left(\mu ,a,b\right)}$ must pass through at least one integer point. Therefore, we can restrict to triples $\left(\mu ,a,b\right)$ with $a,b\in ℤ\text{.}$ We now consider what values of $\mu$ will determine real values for $h$ and $c\text{.}$ Note that $c=13-6\left(\mu +\frac{1}{\mu }\right)\in ℝ$ only if $\mu \in ℝ$ or $\mu \in ℂ$ with $\mid \mu \mid =1\text{.}$ Suppose $\mu \in ℂ-ℝ$ with $\mid \mu \mid =1\text{.}$ Then $\mu =A+Bi=$ with $B\ne 0\text{.}$ It is straightforward to check that $h=\frac{{\left(a\mu -b\right)}^{2}-{\left(\mu -1\right)}^{2}}{4\mu }\in ℝ$ only if ${a}^{2}={b}^{2}\text{.}$ Recall that $\left(\mu ,a,b\right),$ $\left(\mu ,-a,-b\right),$ $\left(\frac{1}{\mu },a,b\right),$ and $\left(\frac{1}{\mu },-a,-b\right)$ all determine to the same weight $\left(h,c\right)\text{.}$ Therefore, we restrict our attention to triples $\left(\mu ,a,b\right)$ with $\mu \in ℝ$ so that $0<\mid \mu \mid <1$ and $a\in {ℤ}_{>0},$ $b\in {ℤ}_{\ne 0}\text{;}$ or with $\mu \in ℂ$ so that $\mid \mu \mid =1,$ $a\in {ℤ}_{>0},$ and $b=±a\text{.}$ • We first consider blocks of size two. A pair $\left(h,c\right)$ belonging to a block of size two lies on exactly one curve ${𝒞}_{r,s}\left(h,c\right)=0,$ and so any line determining the pair $\left(h,c\right)$ passes through exactly one integer point. Therefore, triples in the set $\left\{\left(\mu ,a,b\right) | \mu \in ℝ-ℚ \text{with} \mid \mu \mid <1 \text{and} a\in {ℤ}_{>0},b\in ℤ\right\}\cup \left\{\left(\mu ,a,±a\right) | \mu \in ℂ-ℚ,\mid \mu \mid =1,a\in {ℤ}_{>0}\right\}$ are in one-to-one correspondence with such pairs $\left(h,c\right)\text{.}$ $\phantom{\rule{2em}{0ex}}$If $\left(h,c\right)$ is the pair defined by $\left(\mu ,a,b\right),$ then $M\left(h+ab,c\right)\subseteq M\left(h,c\right)\text{.}$ This implies that $\left(h+ab,c\right)$ corresponds to the line with slope $\mu$ passing through the point $\left(-a,b\right)\text{.}$ Therefore, any block of size two can be identified with a set $\left\{\left(\mu ,a,±b\right)\right\},$ with $\mu ,$ $a,$ and $b$ as in the previous paragraph. Taking one triple from each of these pairs of triples, we see that the set ${ (μ,a,b) | μ∈ℝ-ℚ with ∣μ∣ <1 and a,b∈ℤ>0 } ∪ { (μ,a,a) | μ∈ℂ-ℚ,∣μ∣ =1,a∈ℤ>0 }$ indexes the blocks of size two. • Now we consider infinite blocks. 
Let $\left(h,c\right)$ be a pair in an infinite block. We assume $0<\mu \le 1$ and $\mu \in ℚ\text{.}$ (The arguments for $1\le \mu <0$ are the similar.) Write $\mu =\frac{p}{q}$ such that $p$ and $q$ are relatively prime. Consider weights $\left(h,c\right)$ which are maximal in their own block $\left[\left(h,c\right)\right]\text{.}$ Since $M\left(h,c\right)$ does not embed in any other Vermas, any line determined by $\left(h,c\right)$ must pass through only integer points $\left(a,b\right)$ such that $ab>0\text{.}$ It is clear that the triples corresponding to maximal weights are contained in the set $\left\{\left(\mu ,a,b\right) | 0\le a<\frac{q}{2},0\le b (or $\left\{\left(\mu ,a,b\right) | 0\le a if $q$ is even). However, $\left(\frac{p}{q},a,b\right)$ and $\left(\frac{p}{q},q-a,p-b\right)$ determine the same weight. Therefore, the set $\left\{\left(\mu ,a,b\right) | 0\le a<\frac{q}{2},0\le b (or $\left\{\left(\mu ,a,b\right) | 0\le a contains exactly one triple corresponding to each such pair $\left(h,c\right)\text{.}$ $\phantom{\rule{2em}{0ex}}$We can also describe the block $\left[\left(h,c\right)\right]\text{.}$ Let $\left(\mu ,a,b\right)$ $\text{(}\mu =\frac{p}{q}\in {ℚ}_{\ne 0}$ with $\mid \mu \mid \le 1$ and $a,b\in ℤ$ with $0\le a<\frac{q}{2}$ be a triple which determines $\left(h,c\right)\text{.}$ Then the integer points lying on ${ℒ}^{\left(\frac{p}{q},a,b\right)}$ are $\left(a+kq,b+kp\right),$ $k\in ℤ\text{.}$ This implies that $M\left(h+\left(a+kq\right)\left(b+kp\right),c\right)\subseteq M\left(h,c\right)\text{.}$ Therefore, the line given by $\left(\mu ,-a+kq,-\left(b+kp\right)\right)$ must determine $\left(h+\left(a+kq\right)\left(b+kp\right),c\right)\in ℬ\text{.}$ This may not produce all pairs $\left(h,c\right)$ in the block (as in case (iv) of Theorem 3.5.1). Therefore, we also consider the pair $\left(h+ab,c\right)\in ℬ,$ which is determined by the triple $\left(\mu ,a,-b\right)\text{.}$ Using the same argument as above, we get the triples $\left(\mu ,a,-b\right)\text{.}$ Therefore, the set ${ (pq,a-kq,±b+kp) | k∈ℤ } ⟷ { (pq,a,±b+2kp) | k∈ℤ }$ is in general a set of representatives for the elements of the block corresponding to $\left(\frac{p}{q},a,b\right)\text{.}$ If $a=0,$ the triples $\left(\frac{p}{q},0,b+2kp\right)$ and $\left(\frac{p}{q},0,-b-2kp\right)$ correspond to distinct lines but still determine the same weight. In this case, the set $\left\{\left(\frac{p}{q},0,±b+2kp\right) | k\in {ℤ}_{\ge 0}\right\}$ forms set of representatives of the elements of the block. $\phantom{\rule{2em}{0ex}}$Consider the example with $\mu =\frac{2}{3}\text{.}$ $s=8 s=4 s=0 s=-4 {ℒ}^{\left(\frac{2}{3},-2,1\right)} {ℒ}^{\left(\frac{2}{3},1,1\right)} {ℒ}^{\left(\frac{2}{3},1,-1\right)} {ℒ}^{\left(\frac{2}{3},4,-3\right)} \left(-2,-1\right) \left(1,1\right) \left(4\text{.}3\right)$ The set of integer points $\left\{\left(a,b\right)\in {ℤ}^{2} | 0\le a<\frac{3}{2},0\le b<2\right\}$ indexes the infinite blocks with $c=13-6\left(\frac{2}{3}+\frac{3}{2}\right)=0\text{.}$ The line ${ℒ}^{\left(\frac{2}{3},1,1\right)}$ determines the weight $\left(0,0\right)\text{.}$ From the integer points $\left(1,1\right),$ $\left(-2,-1\right),$ and $\left(4,3\right)$ on the line ${ℒ}^{\left(\frac{2}{3},1,1\right)},$ we get the lines ${ℒ}^{\left(1,-1\right)},$ ${ℒ}^{\left(\frac{2}{3},-2,1\right)},$ and ${ℒ}^{\left(\frac{2}{3},4,-3\right)}\text{;}$ these lines determine the weights $\left(1,0\right),$ $\left(2,0\right),$ and $\left(12,0\right)$ respectively. 
In general, the set of points $\left\{\left(1,4k±1\right) | k\in ℤ\right\}$ correspond to the block ${((12k+2±3)2-124,0) | k∈ℤ}= {(j(3j±1)2,0) | j∈ℤ≥0}.$ $\square$ Define the group $W=⟨{s}_{0},{s}_{1} | {s}_{i}^{2}=1⟩\text{.}$ We can define an action of $W$ on the triples $\left(\mu ,a,b\right)$ so that (i) a block of size two $\left[\left(\mu ,a,b\right)\right]$ is the orbit of the subgroup $⟨{s}_{0}⟩\subseteq W\text{;}$ (ii) a infinite block $\left[\left(±\frac{p}{q},±a,b\right)\right],$ with $a\ne 0,$ is the orbit of $W\text{;}$ (iii) a infinite block $\left[\left(±\frac{p}{q},0,b\right)\right],$ $b\ne 0,$ is the orbit of the subgroup $⟨{s}_{1},{s}_{0}{s}_{1}{s}_{0}⟩\subseteq W\text{;}$ (iv) a infinite block $\left[\left(±\frac{p}{q},0,0\right)\right]$ is of the form $\left\{{\left({s}_{1}{s}_{0}\right)}^{k}\left(±\frac{p}{q},0,0\right) | k\in {ℤ}_{\ge 0}\right\}\text{.}$ Proof. We define an action of $W$ on triples $\left(\mu ,a,b\right)$ as follows: • ${s}_{0}$ is the reflection about 0: ${s}_{0}\left(\mu ,a,b\right)=\left(\mu ,a,-b\right)\text{;}$ • for $\mu =\frac{p}{q},$ ${s}_{1}$ is the reflection about $p\text{:}$ ${s}_{1}\left(±\frac{p}{q},±a,b\right)=\left(±\frac{p}{q},±a,-\left(b-p\right)+p\right)=\left(±\frac{p}{q},±a,-b+2p\right)\text{.}$ Then (i) and (ii) follow from the previous proposition. For (iii), note that ${s}_{0}{s}_{1}{s}_{0}$ is the reflection about $-p\text{.}$ Also, $\left(\frac{p}{q},0,-b+2kp\right)$ and $\left(\frac{p}{q},0,b-2kp\right)$ determine the same weight. Then $⟨{s}_{1},{s}_{0}{s}_{1}{s}_{0}⟩$ generates $\left[\left(±\frac{p}{q},0,b\right)\right],$ where we replace $\left(\frac{p}{q},0,-b+2kp\right)$ with $\left(\frac{p}{q},0,b-2kp\right)$ for $k$ even. Finally, we have that ${s}_{1}{s}_{0}$ is translation by $2p\text{.}$ Then, (iv) follows. $\square$ ### Translation Functors We now consider $M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\text{.}$ From Theorem 2.3.12, we know that $M(h,c)⊗ L(h′,c′)= ⨁[μ]∈[𝔥*] (M(h,c)⊗L(h′,c′))[μ]$ and ${\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\ne 0$ only if $\left[\mu \right]=\left[\left(h+h\prime +k,c+c\prime \right)\right]$ for some $k\in {ℤ}_{\ge 0}\text{.}$ Moreover, we know this submodule has a filtration by Verma modules. In this section, we use the contravariant form to better describe ${\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\text{.}$ Recall $⟨,⟩:M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)×M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\to ℂ$ is defined by $⟨v⊗w,v′⊗w′⟩= ⟨v,v′⟩⟨w,w′⟩$ where $v,v\prime \in M\left(h,c\right)$ and $w,w\prime \in L\left(\stackrel{\sim }{h},c\prime \right)\text{.}$ This form on $M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)$ is contravariant. Let $\left(h,c\right),\left(h\prime ,c\prime \right)\in {ℝ}^{2},$ and let $\left\{{w}_{k,j} | 1\le j\le \text{dim}\left(L{\left(h\prime ,c\prime \right)}^{\left(h\prime +k,c\prime \right)}\right)\right\}$ be a basis for $L{\left(h\prime ,c\prime \right)}^{\left(h\prime +k,c\prime \right)}\text{.}$ From Lemma 2.6.2, the following sets are bases for ${\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{:}$ ${ d-λv+⊗ wk,i | ∣λ∣=n, 1≤i≤dim (L(h′,c′)(h′+k,c′)) } ; (3.16)$ ${ d-λ (v+⊗wk,i) | ∣λ∣=n, 1≤i≤dim (L(h′,c′)(h′+k,c′)) } . 
(3.17)$ We defined $det(M(h,c)⊗L(h′,c′))(h+h′+k,c+c′)≔ det(⟨d-λv+⊗wm,j,d-λ*v+⊗wm′,j′⟩)$ where the entries in the matrix are indexed over partitions $\lambda$ and ${\lambda }^{*}$ and positive integers $m,m\prime ,j,j\prime$ such that $\mid \lambda \mid =k-m,$ $\mid {\lambda }^{*}\mid =k-m\prime ,$ $1\le j\le \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +m,c\prime \right)},$ and $1\le j\prime \le \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +m\prime ,c\prime \right)}\text{.}$ From Lemma 2.6.3, we have $det(M(h,c)⊗L(h′,c′))(h+h′+k,c+c′) = ∏j≤k (det M(h,c)(h+k-j,c)) dim L(h′,c′)(h′+j,c′) × (det L(h′,c′)h′+j,c′) p(k-j) . (3.18)$ For $\left(h,c\right),\left(h\prime ,c\prime \right)\in {ℝ}^{2}$ and $k\in {ℤ}_{\ge 0},$ $det(M(h,c)⊗L(h′,c′))(h+h′+k,c+c′)$ is given by $∏0≤j≤k (det M(h+h′+j,c+c′)(h+h′+k,c+c′)) dim L(h′,c′)(h′+j,c′) (aj(h′,c′)(h,c) det L(h′,c′)(h′+j,c′))p(k-j),$ where $aj(h′,c′)(h,c)= ∏1≤r≤srs≤j (𝒞r,s(h,c)𝒞r,s(h+h′+j-rs,c+c′)) dim L(h′,c′)(h′+j-rs,c′) .$ Proof. Using Equation 3.18, we only need to show that $∏j≤k (aj(h′,c′)(h,c)) p(k-j) = ∏j≤k ( det M(h,c)(h+k-j,c) det M(h+h′+j,c+c′)(h+h′+k,c+c′) ) dim L(h′,c′)(h′+j,c′) .$ Note that $∏0≤j≤k ( det M(h,c)(h+k-j,c) det M(h+h′+j,c+c′)(h+h′+k,c+c′) ) dim L(h′,c′)(h′+j,c′) = ∏0≤j≤k1≤r≤s (𝒞r,s(h,c)𝒞r,s(h+h′+j,c+c′)) p(k-j-rs)dim L(h′,c′)(h′+j,c′) = ∏j∈ℤ1≤r≤s (𝒞r,s(h,c)𝒞r,s(h+h′+j,c+c′)) p(k-j-rs)dim L(h′,c′)(h′+j,c′) ⏟ We can let the product range over all j∈ℤ since p(k-j-rs)=0 for j>k and dim L(h′,c′)(h′+j,c′)=0 for j<0. = ∏j∈ℤ1≤r≤s (𝒞r,s(h,c)𝒞r,s(h+h′+j-rs,c+c′)) p(k-j)dim L(h′,c′)(h′+j-rs,c′) ⏟ We shift j→j-rs = ∏0≤j≤k1≤r≤srs≤j (𝒞r,s(h,c)𝒞r,s(h+h′+j-rs,c+c′)) p(k-j)dim L(h′,c′)(h′+j-rs,c′) .$ $\square$ Fix $\left(h\prime ,c\prime \right)\in {ℝ}^{2}\text{.}$ Consider $\left(h,c\right)\in {ℝ}^{2}$ such that $\left(h+n,c\right)\notin \left[\left(h,c\right)\right]$ for any $n\in {ℤ}_{>0}$ (i.e. $M\left(h,c\right)$ is irreducible). For each $\left[\mu \right]\in \left[{𝔥}^{*}\right]$ and $n\in {ℤ}_{\ge 0},$ there is a projection map $Prn[μ]: (M(h,c)⊗L(h′,c′)) (h+h′+n,c+c′) → ((M(h,c)⊗L(h′,c′))[μ]) (h+h′+n,c+c′)$ given by $Prn[μ](w)=w- ∑[γ]≠[μ] ∑1m⟨w,vi[γ]⟩ v‾i[γ]$ where $\left\{{v}_{1}^{\left[\gamma \right]},\dots ,{v}_{m}^{\left[\gamma \right]}\right\}$ and $\left\{{\stackrel{‾}{v}}_{1}^{\left[\gamma \right]},\dots ,{\stackrel{‾}{v}}_{m}^{\left[\gamma \right]}\right\}$ are dual bases for ${\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\gamma \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{.}$ Proof. Since $\left(h+n,c\right)\notin \left[\left(h,c\right)\right]$ for all $n\in {ℤ}_{\ge 0},$ Equation 3.18 implies that the contravariant form is nondegenerate on ${\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{.}$ From Proposition 2.7, distinct blocks are orthogonal with respect to the form. Therefore, the contravariant form is nondegenerate on each block. Let $\left\{{v}_{1}^{\left[\gamma \right]},\dots ,{v}_{m}^{\left[\gamma \right]}\right\}$ be a basis for ${\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\gamma \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{.}$ Since the contravariant form is nondegenerate on this space, there is a dual basis $\left\{{\stackrel{‾}{v}}_{1}^{\left[\gamma \right]},\dots ,{\stackrel{‾}{v}}_{m}^{\left[\gamma \right]}\right\}$ for this space, i.e. 
$⟨{v}_{i}^{\left[\gamma \right]},{\stackrel{‾}{v}}_{l}^{\left[\gamma \right]}⟩={\delta }_{i,l}\text{.}$ We define a map $Prn[μ]: (M(h,c)⊗L(h′,c′)) (h+h′+n,c+c′) → (M(h,c)⊗L(h′,c′)) (h+h′+n,c+c′)$ by $Prn[μ](w)=w- ∑[γ]≠[μ] ∑1m⟨w,vi[γ]⟩ v‾i[γ].$ Note that $⟨{\text{Pr}}_{n}^{\left[\mu \right]}\left(w\right),{v}_{i}^{\left[\gamma \right]}⟩=0$ whenever $\left[\gamma \right]\ne \left[\mu \right]$ since distinct blocks are orthogonal. Therefore, ${\text{Pr}}_{n}^{\left[\mu \right]}\left(w\right)\in {\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{.}$ Also, for $w\in {\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{,}$ ${\text{Pr}}_{n}^{\left[\mu \right]}\left(w\right)=w\text{.}$ $\square$ Fix $\left(h\prime ,c\prime \right)\in {ℝ}^{2}$ and $n\in {ℤ}_{\ge 0}\text{.}$ Suppose $\left(h,c\right)\in {ℝ}^{2}$ is such that $\left(h+j,c\right)\notin \left[\left(h,c\right)\right]$ and $\left(h+h\prime +j,c+c\prime \right)\notin \left[\left(h+h\prime +k,c+c\prime \right)\right]$ for all $j,k\le n$ with $j\ne k\text{.}$ $\phantom{\rule{2em}{0ex}}$Then the submodule of $M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)$ generated by $⨁0≤j≤n (M(h,c)⊗L(h′,c′)) (h+h′+j,c+c′)$ is isomorphic to $⨁0≤j≤n M(h+h′+j,c+c′)⊕dimL(h′,c′)(h′+j,c′).$ For a suitable choice of generating highest weight vectors $\left\{{v}_{j,i}^{+} | 1\le i\le \text{dim}{\left(L\left(h\prime ,c\prime \right)\right)}^{\left(h\prime +j,c\prime \right)}\right\}$ of $\underset{0\le j\le n}{⨁}M{\left(h+h\prime +j,c+c\prime \right)}^{\oplus \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +j,c\prime \right)}}\subseteq M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right),$ this sum is orthogonal with respect to the contravariant form on $M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right),$ and $∏1≤i≤dim(L(h′,c′))(h′+j,c′) ⟨vj,i+,vj,i+⟩= aj(h′,c′)(h,c) det L(h′,c′)(h′+j,c′).$ Proof. Since $\left[\left(h,c\right)\right]\ne \left[\left(h+j,c\right)\right]$ for all $j\le n,$ the projection maps from the previous lemma are well-defined. We have assumed $\left[\left(h+h\prime +j,c+c\prime \right)\right]\ne \left[\left(h+h\prime +k,c+c\prime \right)\right]$ for $j,k\le n$ and $j\ne k\text{.}$ Therefore, for ${\mu }_{j}=\left(h+h\prime +j,c+c\prime \right)$ with $j\le n,$ the set $\left\{{\text{Pr}}_{j}^{\left[{\mu }_{j}\right]}\left({v}^{+}\otimes {w}_{j,i}\right) | 1\le i\le \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +j,c\prime \right)}\right\}$ is a basis for ${\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[{\mu }_{j}\right]}\right)}^{\left(h+h\prime +j,c+c\prime \right)},$ made up of highest weight vectors. 
Choose vectors $\left\{{v}_{j,i}^{+}\right\}$ such that • the transition matrix from $\left\{{\text{Pr}}_{j}^{\left[{\mu }_{j}\right]}\left({v}^{+}\otimes {w}_{j,i}\right)\right\}$ to $\left\{{v}_{j,i}^{+}\right\}$ has determinant 1; • $⟨{v}_{j,i}^{+},{v}_{j,k}^{+}⟩=0$ if $i\ne k\text{.}$ Note that $∏i⟨vj,i+,vj,i+⟩ =det ( ⟨ Prj[μjj] (v+⊗wj,i), Prj[μj] (v+⊗wj,k) ⟩ ) .$ Then $\text{det}\left(⟨{d}_{-\lambda }{\text{Pr}}_{j}^{\left[{\mu }_{j}\right]}\left({v}^{+}\otimes {w}_{j,i}\right),{d}_{-\stackrel{\sim }{\mu }}{\text{Pr}}_{j}^{\left[{\mu }_{j}\right]}\left({v}^{+}\otimes {w}_{j,k}\right)⟩\right)=\prod _{i}\left(⟨{d}_{-\mu }{v}_{j,i}^{+},{d}_{-\stackrel{\sim }{\lambda }}{v}_{j,i}^{+}⟩\right)$ is $(∏i⟨vj,i+,vj,i+⟩)p(n-j)× (det (h+h′+j,c+c′)(h+h′+n,c+c′)) dim L(h′,c′)(h′+j,c′) .$ Therefore, we only need to determine $\prod _{i}⟨{v}_{j,i}^{+},{v}_{j,i}^{+}⟩\text{.}$ We do this inductively. Suppose $det(⟨Prk[μk](v+⊗wk,i),Prk[μk](v+⊗wk,l)⟩) =ak(h′,c′)(h,c) det L(h′,c′)(h′+k,c′)$ for $k Since distinct blocks are orthogonal, we have $\text{det}{\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left(h+h\prime +j,c+c\prime \right)}$ is given by $∏k≤jdet ( ⟨ d-μPrj[γk] (v+⊗d-λw+), d-μ∼Prj[γk] (v+⊗d-λ∼w+) ⟩ ) = det ( ⟨ d-μPrj[μ] (v+⊗d-λw+), d-μ∼Prj[μ] (v+⊗d-λ∼w+) ⟩ ) ∏k From Lemma 3.6.1, this implies $det (⟨Prj[μj](v+⊗wj,i),Prj[μj](v+⊗wj,l)⟩)= aj(h′,c′)(h,c) det L(h′,c′)(h′+j,c′).$ $\square$ For $\gamma =\left(h+h\prime +k,c+c\prime \right),$ the set $Bn[γ]= { d-μ(Prj[γ](v+⊗d-λw+)) | (h+h′+j,c+c′)∈ [γ],∣λ∣=j, ∣μ∣=n-j }$ is a basis for ${\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\gamma \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}\text{.}$ Define $det((M(h,c)⊗L(h′,c′))[γ])(h+h′+n,c+c′) =det(⟨v,w⟩)v,w∈Bn[γ]$ Let $\left(h,c\right),\left(h\prime ,c\prime \right)\in {ℝ}^{2},$ $\left[\gamma \right]\in \left[{𝔥}^{*}\right],$ and $n\in {ℤ}_{\ge 0}\text{.}$ Suppose $\left(h+k,c\right)\notin \left[\left(h,c\right)\right]$ for all $0\le k\le n\text{.}$ Then, $det((M(h,c)⊗L(h′,c′))[γ])(h+h′+n,c+c′)$ is $= ∏(h+h′+j,c+c′)∈[γ] ( (det M(h+h′+j,c+c′)(h+h′+n,c+c′))dim L(h′,c′)(h′+j,c′) × (aj(h′,c′)(h,c) det L(h′,c′)(h′+j,c))p(n-j) )$ Proof. Let $n\in {ℤ}_{\ge 0}$ and let $K$ be any set of positive integers between $0$ and $n\text{.}$ Fix $\left(h\prime ,c\prime \right)\in {ℝ}^{2}$ and consider all $\left(h,c\right)\in {ℝ}^{2}$ such that $\left[\left(h+h\prime +k,c+c\prime \right)\right]\ne \left[\left(h+h\prime +k\prime ,c+c\prime \right)\right]$ for any $k\prime$ such that $k\prime \notin K\text{.}$ Let $MK=∑k∈K (M(h,c)⊗L(h′,c′))(h+h′+k,c+c′)$ We can construct projection maps ${\text{Pr}}_{j}^{K}:{\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left(h+h\prime +j,c+c\prime \right)}\to {\left({M}^{K}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ analogous to those in Propostion 3.6.2. 
Applying these projection maps to the basis to $\left\{{v}^{+}\otimes {w}_{i}^{j}\right\},$ we can construct a basis $\left\{{v}_{1},\dots ,{v}_{m}\right\}$ for ${\left({M}^{K}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ which are linear combinations of the basis $\left\{{d}_{-\lambda }\left({v}^{+}\otimes {w}_{i}^{j}\right) | j\le n,\mid \lambda \mid =n-j\right\}$ with coefficients which are rational functions of $h$ and $c\text{.}$ Consider $det(MK)(h+h′+n,c+c′) =det(⟨vi,vj⟩)1≤i,j≤m.$ This will be a rational function in $h$ and $c\text{.}$ For most choices of $\left(h,c\right),$ $\left[\left(h+h\prime +k,c+c\prime \right)\right]=\left\{\left(h+h\prime +k,c+c\prime \right)\right\}$ for each $k\in K$ and so $MK≅⨁k∈KM (h+h′+k,c+c′)⊕dim L(h′,c′)(h′+k,c′).$ Write ${\gamma }_{k}=\left(h+h\prime +k,c+c\prime \right)\text{.}$ Lemma 3.6.3 implies that for such choices of $h$ and $c,$ $det(MK)(h+h′+n,c+c′) = ∏k∈Kdet ((M(h,c)⊗L(h′,c′))[γk])(h+h′+n,c+c′) (3.19) = ∏k∈K (ak(h′,c′)(h,c)L(h′,c′)(h′+k,c′))p(n-k) M(h+h′+k,c+c′)(h+h′+n,c+c′). (3.20)$ Since $\text{det}{\left({M}^{K}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is a rational function of $h$ and $c,$ Equation 3.20 holds for all $\left(h,c\right)$ where $\text{det}{\left({M}^{K}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is defined. In particular, if $\left[\gamma \right]\cap \left\{\left(h+h\prime +j,c+c\prime \right) | 0\le j\le n\right\}=\left\{\left(h+h\prime +k,c+c\prime \right) | k\in K\right\},$ then $\text{det}{\left({\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\gamma \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is $∏j∈[γ] (aj(h′,c′)(h,c) det L(h′,c′)(h′+j,c′))p(n-j) (det M(h+h′+j,c+c′)(h+h′+n,c+c′))dim L(h′,c′)(h′+j,c′).$ $\square$ We define a Jantzen-type filtration on $M\left(\lambda \right)\otimes L\left(\mu \right)$ in the following way. For an indeterminant $t,$ we define the Vir-module $M\left(h+t,c\right)$ as in Section 2.5. The map $\epsilon :ℂ\left[t\right]\to ℂ$ $\left(t↦0\right)$ to a map $ε:M(h+t,c)⊗ L(h′,c′)⟶ M(h,c)⊗L(h′,c′).$ For each $j\in {ℤ}_{\ge 0},$ define $(M(h+t,c)⊗L(h′,c′))j={v∈M(h+t,c)⊗L(h′,c′) | tj|⟨v,w⟩ for all w∈M(h+t,c)⊗L(h′,c′)}$ and $(M(h,c)⊗L(h′,c′))j= ε((M(h+t,c)⊗L(h′,c′))j).$ Let $j\in {ℤ}_{\ge 0}\text{.}$ Then $(M(h,c)⊗L(h′,c′))j =M(h,c)j⊗L(h′,c′).$ Proof. Let $v\in {\left(M\left(h+t,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}_{j}\text{.}$ Since distinct weight spaces are orthogonal with respect to the contravariant form, we may assume $v\in {\left(M\left(h+t,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left(h+h\prime +t+n,c+c\prime \right)}$ for some $n\in {ℤ}_{\ge 0}\text{.}$ For each $j\le n,$ let $\left\{{w}_{j,i}\right\}$ be a basis for $L\left(h\prime ,c\prime \right)$ which is orthonormal with respect to the contravariant form. 
(Such a basis exists since the contravariant form is nondegenerate on $L\left(h\prime ,c\prime \right)\text{.)}$ We may write $v=∑j=0n∑i vj,i⊗wj,i$ for some ${v}_{j,i}\in M\left(h+t,c\right)\text{.}$ Then, for any $v\prime \in M\left(h+t,c\right),$ $k\le n,$ and $1\le m\le \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +k,c\prime \right)},$ $⟨v,v′⊗wk,m⟩ = ∑j=0n∑i ⟨vj,i⊗wj,i,v′⊗wk,m⟩ = ∑j=0n∑i ⟨vj,i,v′⟩ ⟨wj,i,wk,m⟩ = ⟨vk,m,v′⟩.$ This implies ${t}^{j}|⟨{v}_{k,m},w⟩$ for all $w\in M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)$ and so ${v}_{k,m}\in M\left(h+t,c\right)\text{.}$ $\square$ Let $\left(h,c\right),\left(h\prime ,c\prime \right)\in {ℂ}^{2}$ and $\left[\mu \right]\in \left[{𝔥}^{*}\right]\text{.}$ Then, for each $n\in {ℤ}_{\ge 0}$ $∑j>0dim ((M(h,c)⊗L(h′,c′))j[μ])(h+h′+n,c+c′)$ is $ord ( ∏0≤k≤n(h+h′+k,c+c′)∈[μ] (ak(h′,c′)(h+t,c))p(n-k) ) .$ Proof. From the previous lemma and Theorem 3.5.2, we know that for each $n\in {ℤ}_{\ge 0}$ $∑j>0dim ((M(h,c)⊗L(h′,c′))j)(h+h′+n,c+c′)$ is given by $ord ∏0≤k≤n (det M(h,c)(h+t+n-k,c)) dim L(h′,c′)(h′+k,c′) =ord ∏0≤k≤n (ak(h′,c′)(h+t,c))p(n-k). (3.21)$ Therefore, to prove the result, we only need to show how these zeros are distributed. From the previous lemma, we have ${\left(M\left(h,c\right)\otimes L\left(h\prime ,c\prime \right)\right)}_{j}=M{\left(h,c\right)}_{j}\otimes L\left(h\prime ,c\prime \right)\text{.}$ Theorem 3.5.2 gives the structure of the Jantzen filtration for $M\left(h,c\right)\text{.}$ We will consider the cases of this result separately. Case (i): In this case, $M\left(h,c\right)$ is irreducible and so $M{\left(h,c\right)}_{j}=0$ for all $j\text{.}$ This corresponds to $\text{det} M{\left(h,c\right)}^{\left(h+n,c\right)}\ne 0$ for all $n\in {ℤ}_{\ge 0},$ implying $\text{ord}\left({a}_{k}^{\left(h\prime ,c\prime \right)}\left(h+t,c\right)\right)=0\text{.}$ Cases (ii) and (iii): There are integer points $\left({r}_{i},{s}_{i}\right),$ $1\le i\le k$ for some $k\in {ℤ}_{>0},$ on the line ${ℒ}_{\left(h,c\right)}$ such that • $M{\left(h,c\right)}_{j}=M\left(h+{r}_{j}{s}_{j},c\right)$ for $j\le k\text{;}$ • ${\left(M{\left(h,c\right)}_{j}\right)}^{\left(h+m,c\right)}=0$ for $j>k$ and $m\le n\text{.}$ Then we have a correspondence between • distinct zeros in $\text{det} M{\left(h,c\right)}^{\left(h+m,c\right)},$ which will have the form ${𝒞}_{{r}_{j},{s}_{j}}\left(h,c\right)\text{;}$ • $j$ such that ${\left(M{\left(h,c\right)}_{j}\right)}^{\left(h+m,c\right)}\ne 0\text{.}$ Moreover, the multiplicity of the zero ${𝒞}_{{r}_{j},{s}_{j}}\left(h,c\right)$ in $\text{det} M{\left(h,c\right)}^{\left(h+m,c\right)}$ is $p\left(m-{r}_{j}{s}_{j}\right)=\text{dim} M{\left(h+{r}_{j}{s}_{j},c\right)}^{\left(h+m,c\right)}\text{.}$ Now, if $1\le j\le k,$ we can describe the decomposition of $M{\left(h,c\right)}_{j}\otimes L\left(h\prime ,c\prime \right)=M\left(h+{r}_{j}{s}_{j},c\right)\otimes L\left(h\prime ,c\prime \right)$ by blocks. 
In particular, by Proposition 2.6.1, we know that ${\left(M\left(h+{r}_{j}{s}_{j},c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}$ has a filtration by Verma modules $0={M}_{0}\subseteq {M}_{1}\subseteq \cdots$ such that • ${\left(M\left(h+{r}_{j}{s}_{j},c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}=\bigcup {M}_{i}\text{;}$ • ${M}_{i}/{M}_{i-1}\cong M{\left(h+{r}_{j}{s}_{j}+h\prime +{k}_{j,i},c+c\prime \right)}^{\oplus \text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +{k}_{j,i},c\prime \right)}}$ for each ${k}_{j,i}\in {ℤ}_{\ge 0}$ such that $\left(h+{r}_{j}{s}_{j}+h\prime +{k}_{j,i},c+c\prime \right)\in \left[\mu \right]\text{.}$ This means that $dim((M(h,c)j⊗L(h′,c′))[μ])(h+h′+n,c+c′)= ∑kj,ip(n-(rjsj+kj,i)) dim L(h′,c′)(h′+kj,i,c′),$ where we sum over $\left\{{k}_{j,i} | \left(h+{r}_{j}{s}_{j}+h\prime +{k}_{j,i},c+c\prime \right)\in \left[\mu \right]\right\}\text{.}$ Then, $\sum _{j}\text{dim}{\left({\left(M{\left(h,c\right)}_{j}\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is given by $∑j∑kj,i|(h+rjsj+h′+kj,i,c+c′)∈[μ] p(n-(rjsj+kj,i)) dim L(h′,c′)(h′+kj,i,c′). (3.22)$ On the other hand, $∏0≤k≤n(h+h′+k,c+c′)∈[μ] (ak(h′,c′)(h+t,c))p(n-k)$ is $ord ( ∏0≤k≤n(h+h′+k,c+c′)∈[μ] ∏1≤r≤srs≤k (𝒞r,s(h+t,c)𝒞r,s(h+t+h′+k-rs,c+c′)) dim L(h′,c′)(h′+k-rs,c′)p(n-k) ) . (3.23)$ Given the correspondence stated earlier, we see that (3.23) is $∑0≤k≤n(h+h′+k,c+c′)∈[μ] ∑j|rjsj which is equal to (3.22). Case (iv): We have $M(h,c)j=M (h+nj,1,c)+ M(h+nj,2,c) (3.24)$ where $M(h+nj,1,c)∩ M(h+nj,2,c)= M(h,c)j+1. (3.25)$ Consider ${n}_{{j}_{0},i}$ maximal so that ${n}_{{j}_{0},i}\le n\text{.}$ Then, $(M(h,c)j0⊗L(h′,c′))(h+h′+n,c+c′) = (M(h+nj0,1,c)⊗L(h′,c′))(h+h′+n,c+c′) = ⊕ (M(h+nj0,2,c)⊗L(h′,c′))(h+h′+n,c+c′).$ Again, we know the decomposition of each of these summands by blocks. 
The module ${\left(M\left(h+{n}_{{j}_{0},i},c\right)\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}$ has a filtration by Verma modules where $M\left(h+{n}_{{j}_{0},i}+{k}_{{j}_{0},i,l},c+c\prime \right)$ appears with multiplicity $\text{dim} L{\left(h\prime ,c\prime \right)}^{\left(h\prime +{k}_{{j}_{0},i,l},c\prime \right)}$ for each ${k}_{{j}_{0},i,l}$ such that $\left(h+{n}_{{j}_{0},i}+{k}_{{j}_{0},i,l},c+c\prime \right)\in \left[\mu \right]\text{.}$ Therefore, $\text{dim}{\left({\left(M{\left(h,c\right)}_{{j}_{0}}\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is $= dim((M(h+nj0,1,c)⊗L(h′,c′))[μ])(h+h′+n,c+c′) + dim((M(h+nj0,2,c)⊗L(h′,c′))[μ])(h+h′+n,c+c′) = ∑kj0,i,ldim L(h′,c′)(h′+kj0,i,l,c′) p(n-(nj0,i+kj0,i,l)).$ Using (3.24) and (3.25), we can similarly argue that $\text{dim}{\left({\left(M{\left(h,c\right)}_{{j}_{0}-1}\otimes L\left(h\prime ,c\prime \right)\right)}^{\left[\mu \right]}\right)}^{\left(h+h\prime +n,c+c\prime \right)}$ is $= dim((M(h+nj0-1,1,c)⊗L(h′,c′))[μ])(h+h′+n,c+c′) +dim((M(h+nj0-1,2,c)⊗L(h′,c′))[μ])(h+h′+n,c+c′) -dim((M(h+,c)j0⊗L(h′,c′))[μ])(h+h′+n,c+c′) = ∑kj0,i,l dim L(h′,c′)(h′+kj0-1,i,l,c′) p(n-(nj0-1,i+kj0-1,i,l)) -∑kj0-1,i,l dim L(h′,c′)(h′+kj0,i,l,c′) p(n-(nj0,i+kj0,i,l)).$ In general, $dim((M(h,c)j0-m⊗L(h′,c′))[μ])(h+h′+n,c+c′)$ is given by $∑s=0m(-1)m-s ∑kj0-s,i,l dim L(h′,c′)(h′+kj0-s,i,l,c′) p(n-(nj0-s,i+kj0+s,i,l)).$ Suppose that ${n}_{j,i}\le n$ for $j\le m$ and ${n}_{j,i}>n$ for $j>m\text{.}$ (It may be the case that ${n}_{j,i}\le n$ and ${n}_{j,2}>n\text{.}$ However, the same argument works with only minor modifications.) Then, $∑j∈ℤ>0dim ((M(h,c)j⊗L(h′,c′))[μ])(h+h′+n,c+c′)$ is $∑s=0⌊m-12⌋ ∑k2s+1,i,l dim L(h′,c′)(h′+k2s+1,i,l,c′) p(n-(n2s+1,i+k2s+1,i,l)) (3.26)$ Again, the distinct zeros in $\text{det} M{\left(h,c\right)}^{\left(h+m,c\right)}$ will be exactly of the form ${𝒞}_{{r}_{j,i},{s}_{j,i}}\left(h,c\right),$ where $j=2s+1,$ $0\le s\le ⌊\frac{m-1}{2}⌋,$ and ${r}_{j,i}{s}_{j,i}={n}_{j,i}\text{.}$ Moreover, the multiplicity of the zero ${𝒞}_{{r}_{j,i},{s}_{j,i}}\left(h,c\right)$ in $\text{det} M{\left(h,c\right)}^{\left(h+m,c\right)}$ is $p\left(m-{n}_{j,i}\right)=\text{dim} M{\left(h+{n}_{j,i},c\right)}^{\left(h+m,c\right)}\text{.}$ We then see that $ord ( ∏0≤k≤n(h+h′+k,c+c′)∈[μ] (ak(h′,c′)(h+t,c))p(n-k) )$ is $∑0≤k≤n(h+h′+k,c+c′)∈[μ] ∑j,i|nj,i which is equal to (3.26). $\square$ ## Notes and References This is an excerpt from the PhD thesis Translation Functors and the Shapovalov Determinant by Emilie Wiesner, University of Wisconsin-Madison, 2005.
https://tex.stackexchange.com/questions/165249/how-to-put-words-in-several-lines-in-one-cell
# How to put words in several lines in one cell?

The table is too wide, and I want to make it narrower by wrapping "Second" under "First" in the header cells. How should I achieve this? The LaTeX code currently looks like this:

\begin{table*}[htbp]
  \centering
  \begin{tabular}{c|ccccc}
    \hline
    \textbf{\#} & \textbf{Violations} & \textbf{First Second} & \textbf{First Second} & \textbf{First Second} & \textbf{First Second} \bigstrut\\
    \hline
    1 & 0 & 91 & 101 & 507 & 1973.54 \bigstrut[t]\\
    2 & 0 & 102 & 92 & 472 & 1874.65 \\
    3 & 0 & 104 & 92 & 459 & 1856.21 \\
    4 & 0 & 108 & 100 & 407 & 1790.56 \\
    5 & 0 & 112 & 77 & 511 & 1723.66 \\
    $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
    \hline
  \end{tabular}%
\end{table*}%

• @Werner the "Violations" heading needs to be centered in the two-line header row; is that possible? – sweetyBaby Mar 13 '14 at 3:55

You can use the minimalistic makecell package, which can adjust the alignment of a specific cell:

\documentclass{article}
\usepackage{makecell}% http://ctan.org/pkg/makecell
\begin{document}
\begin{table}[ht]
  \centering
  \begin{tabular}{c|ccccc}
    \hline
    \textbf{\#} & \textbf{Violations} &
    \bfseries\makecell[c]{First \\ Second} &
    \bfseries\makecell[c]{First \\ Second} &
    \bfseries\makecell[c]{First \\ Second} &
    \bfseries\makecell[c]{First \\ Second} \\
    \hline
    5 & 0 & 112 & 77 & 511 & 1723.66 \\
    $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
    \hline
  \end{tabular}%
\end{table}%
\end{document}

• You could make the code even shorter by using the \thead command and declaring \renewcommand{\theadfont}{\bfseries} in the preamble. – Bernard Mar 13 '14 at 16:29

The simplest way would be to just use two header lines. To place a single-line heading (such as \textbf{\#} or \textbf{Violations}) in the vertical center of the two-line header row, you can use \multirow from the multirow package:

## Code:

\documentclass{article}
\usepackage{multirow}
\begin{document}
\begin{table*}[htbp]
  \centering
  \begin{tabular}{c|ccccc}
    \hline
    \multirow{2}{*}{\textbf{\#}} & \multirow{2}{*}{\textbf{Violations}} & \textbf{First} & \textbf{First} & \textbf{First} & \textbf{First} \\
    & & \textbf{Second} & \textbf{Second} & \textbf{Second} & \textbf{Second} \\
    \hline
    5 & 0 & 112 & 77 & 511 & 1723.66 \\
    $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
    \hline
  \end{tabular}%
\end{table*}
\end{document}
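Following Bernard's comment, a shorter variant using \thead might look like the sketch below (the \bigstrut spacing and the remaining data rows of the original table are omitted here). By default \thead centers its content, so the explicit [c] alignment from the \makecell version is not needed:

\documentclass{article}
\usepackage{makecell}% \thead and \theadfont are provided by makecell
\renewcommand{\theadfont}{\bfseries}% make every \thead cell bold
\begin{document}
\begin{table}[ht]
  \centering
  \begin{tabular}{c|ccccc}
    \hline
    \thead{\#} & \thead{Violations} & \thead{First \\ Second} & \thead{First \\ Second} & \thead{First \\ Second} & \thead{First \\ Second} \\
    \hline
    5 & 0 & 112 & 77 & 511 & 1723.66 \\
    $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \\
    \hline
  \end{tabular}
\end{table}
\end{document}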
2020-01-21 13:52:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9991518259048462, "perplexity": 775.5122018812184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604397.40/warc/CC-MAIN-20200121132900-20200121161900-00131.warc.gz"}
https://www.encyclopediaofmath.org/index.php/Affine_tensor
# Affine tensor

An element of the tensor product of $p$ copies of an $n$-dimensional vector space $E$ and $q$ copies of the dual vector space $E^*$. Such a tensor is said to be of type $(p,q)$, the number $p+q$ defining the valency, or degree, of the tensor. Having chosen a basis $\{e_i\}$ in $E$, one defines an affine tensor of type $(p,q)$ with the aid of $n^{p+q}$ components $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ which transform as a result of a change of basis $e'_i = A_i^s e_s$ according to the formula $$T'^{\,i_1\ldots i_p}_{\,j_1\ldots j_q} = A'^{i_1}_{s_1} \cdots A'^{i_p}_{s_p} A^{t_1}_{j_1} \cdots A^{t_q}_{j_q} T^{s_1\ldots s_p}_{t_1\ldots t_q}$$ where $A^s_j A'^i_s = \delta^i_j$. It is usually said that the tensor components undergo a contravariant transformation with respect to the upper indices, and a covariant transformation with respect to the lower ones.

An affine tensor as described above is commonly called simply a tensor.

The tensor $\delta^i_j$ is the Kronecker delta tensor. An isotropic tensor is one whose components are unchanged under a change of basis. The Kronecker delta tensor is isotropic; in dimension $n=3$ the discriminant tensor $\epsilon_{ijk}$ of order 3, defined by $\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1$, $\epsilon_{321} = \epsilon_{213} = \epsilon_{132} = -1$, with all other values zero, is also isotropic.
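As a concrete illustration (this example is not in the original article), specialise the formula to a tensor of type $(1,1)$, i.e. the components of a linear map on $E$:

$$T'^{\,i}_{\,j} = A'^{\,i}_{s}\, A^{t}_{j}\, T^{s}_{t},$$

which, if one assembles the components into matrices $T=(T^s_t)$ and $A=(A^s_i)$, so that $A'=(A'^i_s)$ is the inverse matrix, is the familiar change-of-basis rule $T' = A^{-1} T A$ for the matrix of a linear map.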
2016-07-27 15:20:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982550323009491, "perplexity": 519.1887573575386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00217-ip-10-185-27-174.ec2.internal.warc.gz"}
https://gmatclub.com/forum/if-a-and-b-are-positive-is-a-1-b-1-1-less-than-a-1-b-106509.html
# If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b

Senior Manager, Joined: 28 Aug 2010, Posts: 260

If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] 18 Dec 2010, 14:01

Difficulty: 65% (hard). Question Stats: 59% (01:08) correct, 41% (01:31) wrong, based on 197 sessions.

If a and b are positive, is $$(a^{-1}+b^{-1})^{-1}$$ less than $$(a^{-1}*b^{-1})^{-1}$$?

(1) a = 2b

(2) a + b > 1

[Reveal] Spoiler: OA

_________________
Gmat: everything-you-need-to-prepare-for-the-gmat-revised-77983.html
Ajit

Last edited by Bunuel on 08 Oct 2017, 08:46, edited 1 time in total. Renamed the topic and edited the question.

Math Expert, Joined: 02 Sep 2009, Posts: 42356

Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] 18 Dec 2010, 14:16

ajit257 wrote:
If a and b are positive, is (a-1 + b-1)-1 less than (a-1b-1)-1?
(1) a = 2b
(2) a + b > 1

Question: is $$(a^{-1}+b^{-1})^{-1}<(a^{-1}*b^{-1})^{-1}$$? --> $$(\frac{1}{a}+\frac{1}{b})^{-1}<(\frac{1}{ab})^{-1}$$ --> $$\frac{ab}{a+b}<ab$$, as $$a$$ and $$b$$ are positive we can reduce by $$ab$$ and finally the question becomes: is $$a+b>1$$?

(1) a = 2b --> is $$3b>1$$ --> is $$b>\frac{1}{3}$$, we don't know that, hence this statement is not sufficient.

(2) a + b > 1, directly gives an answer. Sufficient.

P.S. ajit257 you should type the question so that it's clear which is an exponent, which is subtraction, and so on.

Current Student, Joined: 21 Oct 2013, Posts: 193, Location: Germany, GMAT 1: 660 Q45 V36, GPA: 3.51

Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] 12 Dec 2013, 05:10

Bunuel wrote:
Question: is $$(a^{-1}+b^{-1})^{-1}<(a^{-1}*b^{-1})^{-1}$$? --> $$(\frac{1}{a}+\frac{1}{b})^{-1}<(\frac{1}{ab})^{-1}$$ --> $$\frac{ab}{a+b}<ab$$, as $$a$$ and $$b$$ are positive we can reduce by $$ab$$ and finally the question becomes: is $$a+b>1$$?
(1) a = 2b --> is $$3b>1$$ --> is $$b>\frac{1}{3}$$, we don't know that, hence this statement is not sufficient.
(2) a + b > 1, directly gives an answer. Sufficient.

Hey Bunuel, once again, this is a little bit fast for me. I follow your first and third step to reduce the question, but I don't get the second.
I'd explained myself that $$(\frac{1}{ab})^{-1}$$ = $$1*(\frac{ab}{1})$$ so we have ab on the right side. But I don't follow what you did do reduce the left side. Could you explain in detail? Thank you! Kudos [?]: 46 [1], given: 19 Math Expert Joined: 02 Sep 2009 Posts: 42356 Kudos [?]: 133204 [0], given: 12439 Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] ### Show Tags 12 Dec 2013, 05:14 unceldolan wrote: Bunuel wrote: ajit257 wrote: If a and b are positive, is (a-1 + b-1)-1 less than (a-1b-1)-1? (1) a = 2b (2) a + b > 1 Question: is $$(a^{-1}+b^{-1})^{-1}<(a^{-1}*b^{-1})^{-1}$$? --> $$(\frac{1}{a}+\frac{1}{b})^{-1}<(\frac{1}{ab})^{-1}$$ --> $$\frac{ab}{a+b}<ab$$, as $$a$$ and $$b$$ are positive we can reduce by $$ab$$ and finally question becomes: is $$a+b>1$$? (1) a = 2b --> is $$3b>1$$ --> is $$b>\frac{1}{3}$$, we don't know that, hence this statement is not sufficient. (2) a + b > 1, directly gives an answer. Sufficient. P.S. ajit257 you should type the question so that it's clear which is an exponent, which is subtraction, and so on. Hey Bunuel, once again, this is a little bit fast for me. I follow your first and third step to reduce the question, but I don't get the second. I'd explained myself that $$(\frac{1}{ab})^{-1}$$ = $$1*(\frac{ab}{1})$$ so we have ab on the right side. But I don't follow what you did do reduce the left side. Could you explain in detail? Thank you! Sure. $$(\frac{1}{a}+\frac{1}{b})^{-1}$$; $$(\frac{b+a}{ab})^{-1}$$; $$\frac{ab}{b+a}$$. Does this make sense? _________________ Kudos [?]: 133204 [0], given: 12439 Current Student Joined: 21 Oct 2013 Posts: 193 Kudos [?]: 46 [0], given: 19 Location: Germany GMAT 1: 660 Q45 V36 GPA: 3.51 Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] ### Show Tags 12 Dec 2013, 06:29 Bunuel wrote: Sure. $$(\frac{1}{a}+\frac{1}{b})^{-1}$$; $$(\frac{b+a}{ab})^{-1}$$; $$\frac{ab}{b+a}$$. Does this make sense? Yeah, now I see it. Guess my head was just overloaded with math --> it's really clear now! Thanks! Kudos [?]: 46 [0], given: 19 Non-Human User Joined: 09 Sep 2013 Posts: 15499 Kudos [?]: 283 [0], given: 0 Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b [#permalink] ### Show Tags 08 Oct 2017, 05:41 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Kudos [?]: 283 [0], given: 0 Re: If a and b are positive, is (a^(-1) + b^(-1))^(-1) less than (a^(-1)*b   [#permalink] 08 Oct 2017, 05:41 Display posts from previous: Sort by
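As a quick numerical sanity check of Bunuel's reduction (this snippet is not from the thread, just a sketch), one can sample positive a and b and confirm that the original comparison matches the sign of a + b - 1:

```python
import random

def lhs(a, b):
    # (a^-1 + b^-1)^-1 = ab / (a + b)
    return 1.0 / (1.0 / a + 1.0 / b)

def rhs(a, b):
    # (a^-1 * b^-1)^-1 = a * b
    return a * b

random.seed(0)
for _ in range(100_000):
    a = random.uniform(0.01, 5.0)
    b = random.uniform(0.01, 5.0)
    # Bunuel's reduction: the comparison should hold exactly when a + b > 1.
    assert (lhs(a, b) < rhs(a, b)) == (a + b > 1), (a, b)

print("Reduction holds for all sampled positive (a, b).")
```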
2017-11-24 13:22:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6824184656143188, "perplexity": 3833.0177104658587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808133.70/warc/CC-MAIN-20171124123222-20171124143222-00002.warc.gz"}
https://interplayoflight.wordpress.com/2018/09/04/hybrid-raytraced-shadows-part-2-performance-improvements/?replytocom=4965
# Hybrid raytraced shadows part 2: performance improvements

A few weeks ago I documented the experiments I made with hybrid raytraced shadows and reflections, describing how raytracing can be set up and used in the context of a deferred rendering architecture. It was great fun and I managed to produce some nice images. I soon came to realise though that this simplistic approach was mostly suitable for simple models (such as spheres and cubes) as the bounding volume hierarchy (BVH) I created to accelerate scene traversal stored full meshes in the leaves. This reduced the opportunity to accelerate traversal further when a leaf was reached, which is especially bad for large meshes, and complicated the shader a lot by creating many paths through it, potentially increasing thread divergence and reducing occupancy (by increased register allocation). Also the raytracing pass was heavily memory bound, making it scale less well with more complex, and higher polygon, content. The current approach would easily break down when used with more representative game environments/meshes. So I set about improving the performance of the hybrid raytracer. This coincided with Wolfgang Engel inviting me to port the code I wrote for the previous blog post to The Forge, so I switched to that framework to develop the test app. The first improvement I implemented was to store triangles in the leaves instead of full meshes. For every triangle in the scene, transformed to world space, a bounding box is calculated and the BVH is created from that bounding box list, initially using a median split (sorting the bounding box list by distance and splitting it in half), as before. For storage of the BVH tree I initially kept the Structured Buffer from the previous demo, adding an extra float3 and repurposing the members of the struct for either an intermediate node's bounding box or a leaf's triangle vertices. The compute shader became much simpler as there is no need for two separate parts, one to traverse the BVH and one to iterate over (potentially many) mesh triangles to determine collisions. This simple change led to more than halving the raytracing shadow pass cost even in a simple scene (10ms from 23ms on the Intel HD 4000). This was due to a combination of lower memory bandwidth requirements, less ALU work for collision detection as the vertices come pre-transformed to world space (so no need to fetch matrices and do transformations), and reduced divergence in the shader. Also, by having individual triangles in the leaves we reduce the number of ray-triangle intersection tests and increase the number of ray-AABB tests, which are cheaper. Next I tried loading and creating the BVH using the Sponza model. This created a ~558K node tree which at first proved too much for the HD 4000 to handle, for quarter resolution shadows. On an NVidia GTX 970 (Maxwell) the shadow pass cost was 16ms, still quarter resolution (full res was 1920×1080). Having a single structure for both the nodes and the leaves seems wasteful, so I swapped to a raw buffer (ByteAddressBuffer) for BVH storage to handle memory in the shader manually.
```while (offsetToNextNode !=0) { dataOffset += SizeOfInt; collision = false; if (offsetToNextNode < 0) { //try collision against this node's bounding box dataOffset += SizeOfFloat3; dataOffset += SizeOfFloat3; //intermediate node check for intersection with bounding box collision = RayIntersectsBox(worldPos, rayDirInv, bboxMin.xyz, bboxMax.xyz); //if there is collision, go to the next node (left) or else skip over the whole branch if (!collision) dataOffset += abs(offsetToNextNode); } else if (offsetToNextNode > 0) { dataOffset += SizeOfFloat3; dataOffset += SizeOfFloat3; dataOffset += SizeOfFloat3; //leaf node check for intersection with triangle collision = RayTriangleIntersect(worldPos, rayDir, vertex0.xyz, vertex1MinusVertex0.xyz, vertex2MinusVertex0.xyz, t, bCoord); if (collision) { break; } } } ``` Using a ByteAddressBuffer and variable size nodes dropped the shadow cost from 16ms to 13ms on the GTX 970. Memory layout changes helped performance measurably, but 13ms for quarter resolution shadows on the GTX 970 sound a lot. The issue with “real-world” meshes (as opposed to spheres and cubes), is that the polygon distribution is not uniform and triangle sizes vary a lot leading to non-optimal traversal often with overlapping nodes (bounding boxes). To improve this I tried a Surface Area Heuristic (SAH) during BVH construction. To understand how this works imagine a bunch of triangles and a ray: In order to accelerate traversal, a median-split BVH construction approach would sort triangles (across an axis) based on their bounding box centroid, split the list in the middle and create two children nodes. With this scheme the ray will first intersect the left child’s bounding box, which is mostly empty space, and will waste time going down the left subtree only to find out that there is no triangle collision there. If we remove the “split in the middle” requirement and allow the split distance to move, we can potentially find a better split that eliminates that empty space and makes BVH traversal faster (in this example we’ve moved the split distance one to the left): This is exactly what the Surface Area Heuristic does, it considers many splits across all 3 axes: and picks the split distance that minimises $SurfaceArea(AABB_{left}) * NoOfTris_{left} + SurfaceArea(AABB_{right}) * NoOfTris_{right}$ This is the traversal heatmap of the plain, median split BVH. Blue is zero steps through the BVH and red is over 500 : This is the traversal heatmap of the BVH with the Surface Area Heuristic: The number of nodes visited during traversal goes down dramatically, as does the cost of the shadows pass, down to 3.6ms (from 13ms) on the GTX 970, for quarter res shadows. A tip from Yuriy O’Donnell to rearrange the nodes so that the largest one is visited first, dropped this a further 0.3 ms. The SAH and calculating the cost of each split gives us further options, for example we could stop subdivision when splitting a node produces children where the cost is no less than the cost of the parent node. I didn’t implement this in this iteration as it requires leaf nodes potentially handling more than one triangles. At this point I started feeling encouraged and dropped the quarter resolution, trying full resolution shadows instead. This, as expected, increased the cost of the shadows pass to ~10.5ms on the GTX 970. The shadow pass was still TEX bound, the cost of reading the BVH trampling everything else in the shader. 
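To make the split-selection step concrete, here is a small, unoptimised Python sketch of the Surface Area Heuristic described above (my own illustration, not the code used in the post): it tries every sorted split position on each of the three axes and keeps the one minimising SA(left) * N_left + SA(right) * N_right. A production builder would typically bin centroids and sweep prefix bounds instead of rescanning like this.

```python
def surface_area(bmin, bmax):
    # Surface area of an axis-aligned bounding box given as (min, max) corners.
    dx, dy, dz = (bmax[i] - bmin[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def union_bounds(boxes):
    # Smallest AABB enclosing a list of (bmin, bmax) boxes.
    bmin = [min(b[0][i] for b in boxes) for i in range(3)]
    bmax = [max(b[1][i] for b in boxes) for i in range(3)]
    return bmin, bmax

def best_sah_split(tri_boxes):
    """tri_boxes: list of per-triangle (bmin, bmax); needs at least two entries.
    Returns (axis, index, cost): sort on `axis` and put the first `index`
    boxes into the left child."""
    best = None
    for axis in range(3):
        # Sort by bounding-box centroid along this axis.
        order = sorted(tri_boxes, key=lambda b: b[0][axis] + b[1][axis])
        for i in range(1, len(order)):
            left, right = order[:i], order[i:]
            cost = (surface_area(*union_bounds(left)) * len(left) +
                    surface_area(*union_bounds(right)) * len(right))
            if best is None or cost < best[2]:
                best = (axis, i, cost)
    return best
```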
So far I was using a mix of unaligned Load and Load3s to fetch the BVH data in the compute shader from the ByteAddressBuffer. A performance study of various buffers done by Sebastian Aaltonen showed that raw (ByteAddressBuffer) buffers might not be the best option on NVidia GPUs (certainly on Maxwell GPUs like the one I am using — Sebastian has just added performance test results for Volta which show that raw might be the best choice on that architecture). I experimented with changing the storage of the BVH to a float4 typed buffer instead of raw, packing the data in order to reduce memory waste like this: ```struct BVHNode { float4 MinBounds; // OffsetToNextNode in w component float4 MaxBounds; }; struct BVHLeaf { float4 Vertex0; // OffsetToNextNode in w component float4 Edge1; float4 Edge2; }; ``` ```while (offsetToNextNode != 0) { float4 element0 = BVHTree[dataOffset++].xyzw; float4 element1 = BVHTree[dataOffset++].xyzw; offsetToNextNode = int(element0.w); collision = false; if (offsetToNextNode < 0) { //try collision against this node's bounding box float3 bboxMin = element0.xyz; float3 bboxMax = element1.xyz; //intermediate node check for intersection with bounding box collision = RayIntersectsBox(worldPos, rayDirInv, bboxMin.xyz, bboxMax.xyz); //if there is collision, go to the next node (left) or else skip over the whole branch if (!collision) dataOffset += abs(offsetToNextNode); } else if (offsetToNextNode > 0) { float4 element2 = BVHTree[dataOffset++].xyzw; float3 vertex0 = element0.xyz; float3 vertex1MinusVertex0 = element1.xyz; float3 vertex2MinusVertex0 = element2.xyz; //leaf node check for intersection with triangle collision = RayTriangleIntersect(worldPos, rayDir, vertex0.xyz, vertex1MinusVertex0.xyz, vertex2MinusVertex0.xyz, t, bCoord); if (collision) { break; } } } ``` Compared to raw buffer storage this scheme added a byte per node and 2 bytes per leaf. In terms of performance though, it made a massive difference, cutting the shadow cost in half on the GTX 970, down to 5.5ms from 10.5ms. Unfortunately on Intel GPUs, at least the HD 4000 that I am profiling on, the situation is reversed, i.e. raw buffers are significantly faster than typed buffers, meaning for optimal performance on all GPUs we’d need support both types of buffers. Finally, Rory Driscoll suggested a simple optimisation that I somehow had totally missed, although I have used it in the previous game for regular, shadowmap, shadows: avoid casting rays for surfaces that point away from the light. This is simple to implement in the shader as I am already loading the normal. A negative dot product between the light and normal directions means that the surface is pointing away from the light and ray tracing can be skipped for that pixel. ```float depth = depthBuffer[DTid.xy].x; float3 normal = normalBuffer[DTid.xy].xyz; float NdotL = dot(normal, lightDir.xyz); if (depth < 1 && NdotL > 0) { // trace rays } ``` The improvement this change can make to the shadow cost depends on the scene and light direction, in this particular case it cut the cost in half again, down to ~2.5ms from 5.5ms on the GTX 970 and to 20ms from 40ms on the HD 4000 (full-res shadows in both cases). As a reminder we also avoid tracing rays for “sky pixels” (depth == 1). Interestingly, the shadow pass is now ALU bound instead of TEX (memory) bound on GTX970, meaning that the GPU spends more time on calculating intersections than fetching memory. 
Compared to the original (which was rendering quarter res as well). This could potentially be improved by reducing bounding box overlap in the BVH, which would in turn avoid unnecessary intersection tests. Spatial splits seem like a good approach which I will try in a future iteration. Also in some cases, especially for simpler systems like directional light shadows and a non-moving light, parts of the ray-box intersection could be precalculated and stored in the BVH nodes to reduce the amount of computation in the shader.

To summarise the improvements that took place this iteration and their impact:

• Stored triangles instead of full meshes in the BVH leaves. Halved the raytracing cost of a simple scene on the HD 4000.
• Switched to a ByteAddressBuffer from a Structured Buffer for BVH storage; this had a measurable impact on performance on the GTX970 (from 16ms to 13ms, quarter res shadows) and even more on the HD 4000.
• Used a Surface Area Heuristic during BVH creation; this improved traversal time a lot (3.6ms from 13ms on the GTX 970, for quarter res shadows).
• Reordered nodes so that the one with the largest probability of impact is visited first; this had a small positive impact on performance (0.3 ms on the GTX970).
• Switched to a float4 typed Buffer for BVH storage, which improved performance significantly on the GTX970 (down to 5.5ms from 10.5ms for full res shadows), but made performance worse on the HD 4000.
• Avoided casting shadow rays for surfaces that point away from the light. Again, this halved the raytracing cost on both GPUs (down to 2.5ms from 5.5ms on the GTX 970 and to 20ms from 40ms on the HD 4000, full res shadows).

After this batch of improvements full resolution shadows now cost about ~2.5ms (for the profiling light direction/view) at 1920×1080 on the GTX 970 (or 0.82 GigaRays/sec if you prefer that metric), and about 20ms on the Intel HD 4000. Finally, since I unhelpfully changed shadow resolution mid-project, it is worth comparing the impact of the performance improvements when rendering quarter resolution shadows, for which I have recorded the original cost: it is reduced from about 16ms to 0.8ms on the GTX970. The results imply that raytraced shadows could be feasible in "real-world" scenarios, with more representative meshes, using a combination of good BVH acceleration, playing to a particular GPU's strengths and context specific optimisations (like the NdotL optimisation for shadow rays above). The hybrid raytraced shadows sample is now part of The Forge and it supports both DirectX 12 and Vulkan (more platforms are being added as well). It will be available with the next release, which should be available to download this week.

## 3 thoughts on “Hybrid raytraced shadows part 2: performance improvements”

1. Thanks for the write-up Kostas. Is the 20ms figure you mention for the HD4000 measured using the optimal memory layout for that GPU?
1. Yes it is, the HD4000 is using a ByteAddressBuffer to store the BVH.
2. nothings2 says: I don’t have a ton of actual experience with (compute) shader optimization, so take this with a grain of salt, but: If you have a code sequence like “if (usually_true) do_cheap_thing; else do_expensive_thing” then this can be fine if usually_true is true like 99.9% of the time. (It’s also fine if “usually_true” is true like 50% of the time.)
But if “usually_true” is true like 1/64th of the time, and the underlying SIMD is 64 lanes wide, then you’re going to be running the else case almost every time through the loop, but with only 1/64th utilization. (To be clear, I’m referring to your if branch on testing BVH nodes vs testing triangles.) I have no idea what sort of actual numbers you’re getting here so I don’t know if it’s an actual issue. I don’t remember if you can easily use perf tools to see utilization, but it seems like there’s a simple way to optimize it and you can just test it and see if it helps. (Again, caveat, I’ve never actually done this so maybe I’m missing something.) To be explicit, the loop looks like: “while () { setup; if (usually) { cheap_thing } else { expensive_thing }” (I’m writing this as a one-liner because there’s no preview so I don’t know how indented code will show up.) What you want to do is do more of the cheap thing than the expensive thing, so you basically unroll the cheap half of the loop: “setup; while() { if (usually) { cheap_thing; setup; if (usually) { cheap_thing; setup; } } } if (!usually) { expensive_thing; setup; } }” I don’t have any practical experience so I don’t know how many times you should unroll. I’d start with four and see if it makes any difference.
2019-05-21 18:27:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42595183849334717, "perplexity": 2546.8812298052494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256546.11/warc/CC-MAIN-20190521182616-20190521204616-00488.warc.gz"}
https://math.stackexchange.com/questions/3196799/show-that-the-five-digit-number-abcde-is-congruent-mod11-to-a-c-e-b
# Show that the five digit number abcde is congruent (mod11) to $(a + c + e) - (b + d)$ [closed] Show that the five digit number $$abcde$$ is congruent (mod $$11$$) to $$(a + c + e) - (b + d)$$ ## closed as off-topic by Martin R, N. F. Taussig, Javi, José Carlos Santos, LeucippusApr 22 at 15:04 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc." – Martin R, N. F. Taussig, Javi, José Carlos Santos, Leucippus If this question can be reworded to fit the rules in the help center, please edit the question. • Observe that $$abcde = 10000 \times a + 1000 \times b + 100 \times c + 10 \times d + e.$$ – Dbchatto67 Apr 22 at 7:47 ## 2 Answers Hint: Write your number in the form $$e+10d+10^2c+10^3b+a10^4$$ and note that $$10\equiv -1\mod 11$$ $$10^2\equiv 1 \mod 11$$ $$10^3\equiv -1\mod 11$$ $$10^4\equiv 1 \mod 11$$ Observe that $$abcde = 10000 \times a + 1000 \times b + 100 \times c + 10 \times d + e.$$ So \begin{align*} abcde - (a+c+e) + (b+d) & \equiv 9999 \times a + 1001 \times b + 99 \times c + 11 \times d\ (\text {mod}\ 11) \\ & \equiv 0\ (\text {mod}\ 11) \end{align*}
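A brute-force confirmation of the statement proved above (this snippet is not part of the original thread), checking every five-digit number:

```python
from itertools import product

def alternating_sum(digits):
    # (a + c + e) - (b + d) for the digits a, b, c, d, e of abcde
    a, b, c, d, e = digits
    return (a + c + e) - (b + d)

for digits in product(range(10), repeat=5):
    if digits[0] == 0:          # keep it a genuine five-digit number
        continue
    n = int("".join(map(str, digits)))
    assert n % 11 == alternating_sum(digits) % 11

print("abcde and (a+c+e)-(b+d) agree mod 11 for all five-digit numbers.")
```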
2019-05-20 04:58:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988910555839539, "perplexity": 1547.0128421547947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255562.23/warc/CC-MAIN-20190520041753-20190520063753-00329.warc.gz"}
https://www.iacr.org/cryptodb/data/author.php?authorkey=726
## CryptoDB ### Christophe Giraud #### Publications Year Venue Title 2015 EPRINT 2013 PKC 2010 EPRINT At CHES 2003, Piret and Quisquater published a very efficient DFA on AES which has served as a basis for many variants published afterwards. In this paper, we revisit P&Q's DFA on AES and we explain how this attack can be much more efficient than originally claimed. In particular, we show that only 2 (resp. 3) faulty ciphertexts allow an attacker to efficiently recover the key in the case of AES-192 (resp. AES-256). Our attack on AES-256 is the most efficient attack on this key length published so far. 2009 EPRINT Since their publication in 1996, Fault Attacks have been widely studied from both theoretical and practical points of view and most of cryptographic systems have been shown vulnerable to this kind of attacks. Until recently, most of the theoretical fault attacks and countermeasures used a fault model which assumes that the attacker is able to disturb the execution of a cryptographic algorithm only once. However, this approach seems too restrictive since the publication in 2007 of the successful experiment of an attack based on the injection of two faults, namely a second-order fault attack. Amongst the few papers dealing with second-order fault analysis, three countermeasures were published at WISTP'07 and FDTC'07 to protect the RSA cryptosystem using the CRT mode. In this paper, we analyse the security of these countermeasures with respect to the second-order fault model considered by their authors. We show that these countermeasures are not intrinsically resistant and we propose a new method allowing us to implement a CRT-RSA that resists to this kind of second-order fault attack. 2008 CHES 2006 CHES 2005 EPRINT Since the publication of Differential Power Analysis (DPA) in 1998, many countermeasures have been published to counteract this very efficient kind of attacks. All these countermeasures follow the same approach : they try to make sensitive operations uncorrelated with the input. Such a method is very costly in terms of both timing and memory space. In this paper, we suggest a new approach where block ciphers are designed to inherently thwart DPA attacks. The idea we develop in this paper is based on a theoretical analysis of DPA attacks and it essentially consists in embedding existing iterated block ciphers in a secure layer. We analyse the security of our proposal and we show that it induces very small overheads. 2003 EPRINT In this paper we describe two different DFA attacks on the AES. The first one uses a fault model that induces a fault on only one bit of an intermediate result, hence allowing us to obtain the key by using 50 faulty ciphertexts for an AES-128. The second attack uses a more realistic fault model: we assume that we may induce a fault on a whole byte. For an AES-128, this second attack provides the key by using less than 250 faulty ciphertexts. Moreover, this attack has been successfully put into practice on a smart card. 2002 EPRINT For speeding up elliptic curve scalar multiplication and making it secure against side-channel attacks such as timing or power analysis, various methods have been proposed using specifically chosen elliptic curves. We show that both goals can be achieved simultaneously even for conventional elliptic curves over $\mathbb{F}_p$. This result is shown via two facts. First, we recall the known fact that every elliptic curve over $\mathbb{F}_p$ admits a scalar multiplication via a (Montgomery ladder) Lucas chain. 
As such chains are known to be resistant against timing- and simple power/electromagnetic radiation analysis attacks, the security of our scalar multiplication against timing and simple power/electromagnetic radiation analysis follows. Second, we show how to parallelize the 19 multiplications within the resulting \lq\lq double" and \lq\lq add" formulas of the Lucas chain for the scalar multiplication. This parallelism together with the Lucas chain results in 10 parallel field multiplications per bit of the scalar. Finally, we also report on a concrete successful implementation of the above mentioned scalar multiplication algorithm on a very recently developed and commercially available coprocessor for smart cards. 2001 CHES CHES 2017 CHES 2016 CHES 2012 CHES 2009
2019-09-15 06:32:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5120064616203308, "perplexity": 879.0535190454974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570740.10/warc/CC-MAIN-20190915052433-20190915074433-00552.warc.gz"}
http://math.stackexchange.com/questions/47761/fermats-last-theorem-implications-there-is-no-new-proof
# Fermat's Last Theorem: implications (there is no new proof)

I am not experienced in Number Theory, but what I know is that some results of this field are applicable in other areas, e.g. algebra. FLT certainly made (and still makes) people interested in Number Theory, leading to the development of new methods which can themselves be applied beyond the proof of FLT (much as financial problems motivated the development of stochastic analysis). I am interested in whether there are applications or implications of FLT itself. More precisely: does the fact "for each $n\geq3$ there are no integer solutions of $a^n+b^n=c^n$" lead to solutions of problems outside the field of Number Theory? To be specific, I wonder about problems which have already been formulated: since FLT has been known for more than 300 years, I am pretty sure that hypotheses have been formulated which follow directly from FLT (if there are such hypotheses outside the field of Number Theory).
2014-03-11 21:13:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6359671354293823, "perplexity": 185.75797156941255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011278480/warc/CC-MAIN-20140305092118-00009-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathoverflow.net/feeds/question/99415
# Is there any solution to this kind of equation? Or any clue to explore it? - MathOverflow [closed]

http://mathoverflow.net/questions/99415/is-there-any-solution-to-this-kind-of-equation-or-any-clue-to-explore-it

Asked by Zou Fangyu, 2012-06-13.

Possible Duplicate: How to solve f(f(x)) = cos(x)? (http://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx)

$f(f(x))=\sin x$

Answer by 4v4l0n42, 2012-06-13: I don't know if this helps you, but you may write it also as: $f(f(x)) = \frac{1}{2} ie^{-i x}-\frac{1}{2} i e^{ix}$

Answer by Moustafa, 2012-06-13: Answered here http://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx Or even more general here http://mathoverflow.net/questions/17614/solving-ffxgx
2013-05-21 06:32:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34545430541038513, "perplexity": 6859.3992016528755}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699755211/warc/CC-MAIN-20130516102235-00042-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathoverflow.net/revisions/24132/list
Post Closed as "no longer relevant" by Kevin Lin, quid, Qiaochu Yuan, Zev Chonoles, Gjergji Zaimi 2 added 42 characters in body It's a common observation in Lie theory that Cartan matrices and the Killing form are named after the wrong people; they were discovered by Killing and Cartan, respectively. I remember learning about many other examples of this phenomenon, but can't think of too many at the moment. Wikipedia has a short listsome examples here and here, but I'm curious about more obscure examples. So I thought I'd ask MO for a nice list. Bonus points for an interesting story behind why the concept was incorrectly named. Concepts that were deliberately named in honor of another mathematician don't count.
2013-05-24 12:40:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8154335618019104, "perplexity": 478.6619786293213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704658856/warc/CC-MAIN-20130516114418-00059-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.transtutors.com/questions/find-the-optimal-contract-length-when-the-marginal-cost-of-writing-a-contract-of-len-89730.htm
# Q: Find the optimal contract length when the marginal cost of writing a contract of length L

1. Suppose the marginal benefit of writing a contract is $50, independent of its length. Find the optimal contract length when the marginal cost of writing a contract of length L is:
a. MC(L) = 10 + 2L.
b. MC(L) = 5 + 2L.
c. What happens to the optimal contract length when the marginal cost of writing a contract declines?

2. Suppose the marginal cost of writing a contract of length L is MC(L) = 10 + 2L. Find the optimal contract length when the marginal benefit of writing a contract is:
a. MB(L) = 100.
b. MB(L) = 150.
c. What happens to the optimal contract length when the marginal benefit of writing a contract increases?
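For readers who want the arithmetic, the standard reading of these prompts (assumed here, since the listing only shows the problem statements) is to choose the length $L$ at which marginal benefit equals marginal cost, $MB(L)=MC(L)$:

$$\begin{aligned}
\text{1a. } & 50 = 10 + 2L &&\Rightarrow L^{*} = 20,\\
\text{1b. } & 50 = 5 + 2L &&\Rightarrow L^{*} = 22.5,\\
\text{1c. } & \text{a lower marginal cost schedule} &&\Rightarrow \text{a longer optimal contract},\\
\text{2a. } & 100 = 10 + 2L &&\Rightarrow L^{*} = 45,\\
\text{2b. } & 150 = 10 + 2L &&\Rightarrow L^{*} = 70,\\
\text{2c. } & \text{a higher marginal benefit} &&\Rightarrow \text{a longer optimal contract}.
\end{aligned}$$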
2014-11-27 08:11:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2656248211860657, "perplexity": 3986.523074153038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008215.58/warc/CC-MAIN-20141125155648-00053-ip-10-235-23-156.ec2.internal.warc.gz"}
https://pickedshares.com/en/engineering-mechanics-1-exercise-14-determine-bearing-reactions/
# Bearing reactions under sloping load

This exercise shows how to calculate the bearing reactions under a sloping load. A bridge on a floating bearing and a fixed bearing is loaded by a braking car as shown. Determine the bearing reactions in A and B!

## Solution

The following video is in German. Please scroll down for the written solution.

The equilibrium of forces in the x-direction is $\tag{1} \sum F_x = 0 = F \cdot \sin \alpha + F_{Bx}$ $\tag{2} F_{Bx} = - F \cdot \sin \alpha$ The equilibrium of forces in the y-direction is $\tag{3} \sum F_y = 0 = F_{Ay} - F \cdot \cos \alpha + F_{By}$ The equilibrium of moments around point A is $\tag{4} \sum M(A) = 0 = -F \cdot \cos \alpha \cdot a + F_{By} \cdot (a+b)$ $\tag{5} F_{By} = \frac{F \cdot \cos \alpha \cdot a}{a+b}$ From equation 3 follows $\tag{6} F_{Ay} = F \cdot \cos \alpha - F_{By}$ $\tag{7} F_{Ay} = F \cdot \cos \alpha - \frac{F \cdot \cos \alpha \cdot a}{a+b}$ $\tag{8} F_{Ay} = F \cdot \cos \alpha \cdot \left(1 - \frac{a}{a+b} \right)$

This was the basic exercise "bearing reactions under sloping load", used to show the application of fixed and floating bearings. Don't miss the other interesting exercises!
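A quick numerical plausibility check of equations (2), (5) and (8) (this snippet is not part of the exercise; the load, lengths and angle below are made-up values):

```python
from math import sin, cos, radians, isclose

# Illustrative values only: F in kN, lengths in m, alpha in degrees.
F, a, b, alpha = 10.0, 2.0, 3.0, radians(20.0)

F_Bx = -F * sin(alpha)                       # equation (2)
F_By = F * cos(alpha) * a / (a + b)          # equation (5)
F_Ay = F * cos(alpha) * (1 - a / (a + b))    # equation (8)

# The three equilibrium conditions from the exercise must close:
assert isclose(F * sin(alpha) + F_Bx, 0.0, abs_tol=1e-12)                  # sum Fx = 0
assert isclose(F_Ay - F * cos(alpha) + F_By, 0.0, abs_tol=1e-12)           # sum Fy = 0
assert isclose(-F * cos(alpha) * a + F_By * (a + b), 0.0, abs_tol=1e-12)   # sum M(A) = 0

print(F_Ay, F_By, F_Bx)
```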
2022-07-01 10:46:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5548607110977173, "perplexity": 4881.3642751165935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00565.warc.gz"}
https://www.eng-tips.com/viewthread.cfm?qid=457329
# What are AutoCAD applications for batch drawing printing?

## What are AutoCAD applications for batch drawing printing?

### RE: What are AutoCAD applications for batch drawing printing?

Is Batchplot not part of AutoCAD anymore? We used that on a roadway project with around 350 sheets. Took some time getting it set up, but once you had the DWGs added to the queue list it worked very nicely. I think we set up various subsets, like X-Section, Plan & Profile, … and had one version for 11x17 and one for full size plots.

### RE: What are AutoCAD applications for batch drawing printing?

(OP) As far as I know, the batch printing built into AutoCAD cannot recognize several formats (drawings) located in the model space of one file and automatically print them according to the saved settings.

### RE: What are AutoCAD applications for batch drawing printing?

Why would someone keep "several formats (drawings) located in the model space in one file"?

"For every expert there is an equal and opposite expert" Arthur C. Clarke Profiles of the future

### RE: What are AutoCAD applications for batch drawing printing?

Have you tried the AutoCAD command "publish"? You can pick and choose which layouts in which files to print in what order. It's pretty basic, perhaps even crude... For different size plots I print to pdf and then send the pdf to either the 11 x 17 printer or the 24 x 36 plotter.

### RE: What are AutoCAD applications for batch drawing printing?

(OP) How do I use the Publish command to print multiple drawings of different formats (sizes) in Model space to different printers with a single click? For example, there are two documents: in the first there are four A4 drawings, in the second an A1 drawing and an A2 drawing. All these drawings need to be printed on different plotters; how do I do it with one button?

### RE: What are AutoCAD applications for batch drawing printing?

That is not how Model space is used. You can create as many Layouts as you want, configuring the plot settings for each layout as you choose. On each layout, you can show a portion of the contents of Model space using the MVIEW command.

### RE: What are AutoCAD applications for batch drawing printing?

(OP) Our company does not use layout space at all when working in AutoCAD. This solution has its pros and cons. For example, you do not need to configure views for each drawing in the layout space, especially since there can be a lot of such drawings. And if you change the location of the drawing in the model space, you will need to configure it again in the layout space.

### RE: What are AutoCAD applications for batch drawing printing?

What? You are wasting hours, just to save yourselves from trivial tasks that take minutes?? Do you know how to make templates? AutoCAD was given the ability to store numerous layouts nearly 20 years ago. I'm amazed that there are any hold-outs against using it today. I recommend that one or more of the people in your group or department learn basic AutoCAD drawing techniques from a proper course, and bring that knowledge back to your organization to "spread the news".
You are only a handful of commands away from having an efficient way to solve your problem. Commands like MVIEW, MS, PS, ZOOM, ZOOM\1XP, PAN, PLOT, PUBLISH and a few others, plus a few settings that are best accomplished within the PLOT and OPTIONS dialog boxes.

### RE: What are AutoCAD applications for batch drawing printing?

(OP) I agree that we do not consciously use some AutoCAD features. Using only Model space has both disadvantages and advantages. So instead of all the commands you listed, we use only one application, which I gave at the beginning of the topic. I press one button in this application and go to drink tea; meanwhile the application finds all the drawings in Model space (which correspond to the specified sizes A1, A2, etc.) and sends them to the corresponding plotter.

If the OP has chosen to use AutoCAD this way, and has identified a $7.00 program that prints what he/she wants, what is this thread for???

### RE: What are AutoCAD applications for batch drawing printing?

(OP) Well, at least thanks to this thread, I learned that the alternative to the $7 program is the option offered by SparWeb. And I realised that, for now, the program is a faster solution to the problem, at least in my case.
2020-10-29 22:51:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2763131558895111, "perplexity": 4480.281772800078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905965.68/warc/CC-MAIN-20201029214439-20201030004439-00203.warc.gz"}
https://www.physicsforums.com/threads/if-q-1-aq-d-then-columns-of-q-are-eigenvectors-of-a.545178/
# If (Q^-1)AQ = D, then columns of Q are Eigenvectors of A

1. Oct 29, 2011
### blashmet
1. The problem statement, all variables and given/known data
Prove: if (Q^-1)AQ = D, then each column of Q is an eigenvector of A.
2. Relevant equations
A vector v is an eigenvector of A iff there exists a scalar λ such that Av = λv.
3. The attempt at a solution
Suppose (Q^-1)AQ = D. We need to show each column of Q is an eigenvector of A. At this point, should I actually write out (Q^-1)AQ = D in general element form? That is, should I write out Q^-1, A, Q and D as n×n matrices? I'm honestly not even sure how to begin the proof, but any help would be appreciated. Thanks!

2. Oct 29, 2011
### kru_
Well, since Q is invertible, the columns of Q are linearly independent. Since Q is n×n, we have a linearly independent list of n vectors in an n-dimensional space, so we have a basis. Does that help to get you started?

3. Oct 29, 2011
### deluks917
AQ = QD. The other (maybe better) way to see it is to think about change of basis.

4. Oct 29, 2011
### I like Serena
Welcome to PF, blashmet!
As deluks917 said: AQ = QD.
Now write Q as (q1 q2 ...), where q1, q2, ... are the column vectors, and write D out as a diagonal matrix with λ1, λ2, etcetera. Leave A as it is. Now write out the matrix multiplication...

5. Oct 29, 2011
### blashmet
Thanks for the help, guys! This is what I have so far:
Q(Q^-1)AQ = QD iff AQ = QD, where D = diag[d1,...,dn]
AQ = A[q1,...,qn] = [Aq1,...,Aqn]
QD = [d1q1, d2q2,...,dnqn]
so AQ = QD iff [Aq1,...,Aqn] = [d1q1,...,dnqn].
I'm stuck at this point. How do I proceed from here? Thanks! :)

6. Oct 29, 2011
### I like Serena
Good! You have to prove that each column of Q is an eigenvector of A. q1 is the first column of Q. Does it satisfy the criteria for an eigenvector of A?

7. Oct 29, 2011
### blashmet
Hi Serena! In a formal proof of this, do I need to have all of this first?
Q(Q^-1)AQ = QD iff AQ = QD, where D = diag[d1,...,dn]
AQ = A[q1,...,qn] = [Aq1,...,Aqn]
QD = [d1q1, d2q2,...,dnqn]
so AQ = QD iff [Aq1,...,Aqn] = [d1q1,...,dnqn].
Now, is this where I show q1 is an eigenvector of A? If so, I'm confused, because the last line of the proof has [d1q1,...,dnqn]. Don't I need to show d1q1 is an eigenvector?

8. Oct 29, 2011
### deluks917
What is the first column of AQ? What is the first column of QD?

9. Oct 29, 2011
### blashmet
Hi deluks917, I'm not sure how you'd write it, but I know that AQ and QD are n×n square, so it's probably just shorthand notation for the general element form of the product of two n×n square matrices. Can you answer my other questions? (It's ok if you don't know, I just wasn't sure if you saw them.)

10. Oct 29, 2011
### deluks917
I'm not sure why you are confused. You wrote that AQ = QD implies
[Aq1,...,Aqn] = [d1q1,...,dnqn], or
Aq1 = d1q1, ..., Aqn = dnqn.
Do you see the eigenvectors?

11. Oct 30, 2011
### blashmet
It looks like the eigenvalues are d1,...,dn (the diagonal components of the matrix D), so I suppose the eigenvectors would be q1,...,qn (the column vectors). I guess my problem is this: I need to write out a formal proof from beginning to end, and I'm not sure what to add to what I've already written (or if I need to do more than this). Can you help me with that? :)

12. Oct 30, 2011
### I like Serena
You are done. d1,...,dn are the eigenvalues and q1,...,qn are the eigenvectors, since they satisfy Av = λv. So each column of Q is an eigenvector of A.

13. Oct 30, 2011
### blashmet
Does the following suffice as a complete, formal proof?
"d1,...,dn are the eigenvalues and q1,...,qn are the eigenvectors, since they satisfy Av = λv. So each column of Q is an eigenvector of A."
Thanks Serena! :)

14. Oct 30, 2011
### I like Serena
Yes.

15. Oct 30, 2011
### blashmet
Ok great! Thanks Serena!
2017-11-18 23:44:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028616905212402, "perplexity": 1771.0971915892464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805114.42/warc/CC-MAIN-20171118225302-20171119005302-00416.warc.gz"}
https://standards.globalspec.com/std/10342665/nen-en-480-1
# NEN-EN 480-1

## Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing

active, Most Current
Organization: NEN
Publication Date: 1 November 2014
Status: active
Page Count: 15
ICS Code (Cement. Gypsum. Lime. Mortar): 91.100.10
ICS Code (Concrete and concrete products): 91.100.30

##### scope:

NEN-EN 480-1 specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of admixtures in accordance with the series EN 934.

### Document History

NEN-EN 480-1, November 1, 2014
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
NEN-EN 480-1 specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of admixtures...

March 1, 2014
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
NEN-EN 480-1 specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of admixtures...

July 1, 2011
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
This European Standard specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of...

November 1, 2006
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
This European Standard specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of...

February 1, 2005
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
This European Standard specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of...

August 1, 1998
Admixtures for concrete, mortar and grout - Test methods - Part 1: Reference concrete and reference mortar for testing
This standard specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of...

January 1, 1993
Admixtures for concrete, mortar and grout - Test methods - Reference concrete and reference mortar for testing
This standard specifies the constituent materials, the composition and the mixing method to produce reference concrete and reference mortar for testing the efficacy and the compatibility of...
2019-03-18 15:51:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844484269618988, "perplexity": 8325.938731375423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201455.20/warc/CC-MAIN-20190318152343-20190318174343-00428.warc.gz"}
https://m.dexlabanalytics.com/blog/tag/machine-learning-institute-in-gurgaon/page/6
Machine Learning institute in Gurgaon Archives - Page 6 of 9 - DexLab Analytics | Big Data Hadoop SAS R Analytics Predictive Modeling & Excel VBA ## Python Statistics Fundamentals: How to Describe Your Data? (Part II) In the first part of this article, we have seen how to describe and summarize datasets and how to calculate types of measures in descriptive statistics in Python. It’s possible to get descriptive statistics with pure Python code, but that’s rarely necessary. Python is an advanced programming language extensively used in all of the latest technologies of Data Science, Deep Learning and Machine learning. Furthermore, it is particularly responsible for the growth of the Machine Learning course in IndiaMoreover, numerous courses like Deep Learning for Computer vision with Python, Text Mining with Python course and Retail Analytics using Python are pacing up with the call of the age. You must also be in line with the cutting-edge technologies by enrolling with the best Python training institute in Delhi now, not to regret it later. In this part, we will see the Python statistics libraries which are comprehensive, popular, and widely used especially for this purpose. These libraries give users the necessary functionality when crunching data. Below are the major Python libraries that are used for working with data. #### NumPy and SciPy – Fundamental Scientific Computing NumPy stands for Numerical Python. The most powerful feature of NumPy is the n-dimensional array. This library also contains basic linear algebra functions, Fourier transforms, advanced random number capabilities. NumPy is much faster than the native Python code due to the vectorized implementation of its methods and the fact that many of its core routines are written in C (based on the CPython framework). For example, let’s create a NumPy array and compute basic descriptive statistics like mean, median, standard deviation, quantiles, etc. SciPy stands for Scientific Python, which is built on NumPy. NumPy arrays are used as the basic data structure by SciPy. Scipy is one of the most useful libraries for a variety of high-level science and engineering modules like discrete Fourier transforms, Linear Algebra, Optimization and Sparse matrices. Specifically in statistical modelling, SciPy boasts of a large collection of fast, powerful, and flexible methods and classes. It can run popular statistical tests such as t-test, chi-square, Kolmogorov-Smirnov, Mann-Whitney rank test, Wilcoxon rank-sum, etc. It can also perform correlation computations, such as Pearson’s coefficient, ANOVA, Theil-Sen estimation, etc. #### Pandas – Data Manipulation and Analysis Pandas library is used for structured data operations and manipulations. It is extensively used for data preparation. The DataFrame() function in Pandas takes a list of values and outputs them in a table. Seeing data enumerated in a table gives a visual description of a data set and allows for the formulation of research questions on the data. The describe() function outputs various descriptive statistics values, except for the variance. The variance is calculated using the var() function in Pandas. The mean() function, returns the mean of the values for the requested axis. #### Matplotlib – Plotting and Visualization Matplotlib is a Python library for creating 2D plots. It is used for plotting a wide variety of graphs, starting from histograms to line plots to heat plots. 
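Before going further into the plotting features, here is a minimal sketch of the descriptive-statistics steps described above for NumPy and Pandas. The original article presents this step as screenshots, so the code below is an illustrative addition; the sample array values and variable names are made up for demonstration only.

```python
# Illustrative sketch: basic descriptive statistics with NumPy and Pandas.
import numpy as np
import pandas as pd

data = np.array([23.0, 19.5, 31.2, 28.4, 25.1, 22.8, 30.0, 27.3])

# NumPy: mean, median, sample standard deviation and quartiles
print(np.mean(data), np.median(data), np.std(data, ddof=1))
print(np.percentile(data, [25, 50, 75]))

# Pandas: describe() reports count, mean, std, min, quartiles and max;
# the variance is obtained separately with var(), the mean with mean()
df = pd.DataFrame({"value": data})
print(df.describe())
print(df["value"].var())
print(df["value"].mean())
```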
One can use Pylab feature in IPython notebook (IPython notebook –pylab = inline) to use these plotting features inline. If the inline option is ignored, then pylab converts IPython environment to an environment, very similar to Matlab. matplotlib.pylot is a collection of command style functions. If a single list array is provided to the plot() command, matplotlib assumes it is a sequence of Y values and internally generates the X value for you. Each function makes some change to a figure, like creating a figure, creating a plotting area in a figure, decorating the plot with labels, etc. Now, let us create a very simple plot for some given data, as shown below: #### Scikit-learn – Machine Learning and Data Mining Scikit-learn built on NumPy, SciPy and matplotlib. Scikit-learn is the most widely used Python library for classical machine learning. But, it is necessary to include it in the discussion of statistical modeling as many classical machine learning (i.e. non-deep learning) algorithms can be classified as statistical learning techniques. This library contains a lot of efficient tools for machine learning and statistical modeling including classification, regression, clustering and dimensional reduction. #### Conclusion In this article, we covered a set of Python open-source libraries that form the foundation of statistical modelling, analysis, and visualization. On the data side, these libraries work seamlessly with the other data analytics and data engineering platforms, such as Pandas and Spark (through PySpark). For advanced machine learning tasks (e.g. deep learning), NumPy knowledge is directly transferable and applicable in popular packages such as TensorFlow and PyTorch. On the visual side, libraries like Matplotlib, integrate nicely with advanced dashboarding libraries like Bokeh and Plotly. https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html ## Decoding Advanced Loss Functions in Machine Learning: A Comprehensive Guide Every Machine Learning algorithm (Model) learns by the process of optimizing the loss functions. The loss function is a method of evaluating how accurate the given prediction is made. If predictions are off, then loss function will output a higher number. If they’re pretty good, it’ll output a lower number. If someone makes changes in the algorithm to improve the model, loss function will show the path in which one should proceed. Machine Learning is growing as fast as ever in the age we are living, with a host of comprehensive Machine Learning course in India pacing their way to usher the future. Along with this, a wide range of courses like Machine Learning Using Python, Neural Network Machine Learning Python is becoming easily accessible to the masses with the help of Machine Learning institute in Gurgaon and similar institutes. We are having different types of loss functions. • Regression Loss Functions • Binary Classification Loss Functions • Multi-class Classification Loss Functions #### Regression Loss Functions 1. Mean Squared Error 2. Mean Absolute Error 3. Huber Loss Function #### Binary Classification Loss Functions 1. Binary Cross-Entropy 2. Hinge Loss #### Multi-class Classification Loss Functions 1. Multi-class Cross Entropy Loss 2. Kullback Leibler Divergence Loss #### Mean Squared Error Mean squared error is used to measure the average of the squared difference between predictions and actual observations. It considers the average magnitude of error irrespective of their direction. 
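In standard notation (added here for reference, since the original article displays the formula only as an image), the expression reads:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2$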
This expression can be defined as the mean value of the squared deviations of the predicted values from that of true values. Here ‘n’ denotes the total number of samples in the data. #### Mean Absolute Error Absolute Error for each training example is the distance between the predicted and the actual values, irrespective of the sign. ### MAE = | y-f(x) | Absolute Error is also known as the L1 loss. The MAE cost is more robust to outliers as compared to MSE. #### Huber Loss Huber loss is a loss function used in robust regression. This is less sensitive to outliers in data than the squared error loss. The Huber loss function describes the penalty incurred by an estimation procedure f. Huber (1964) defines the loss function piecewise by: This function is quadratic for small values of a, and linear for large values, with equal values and slopes of the different sections at the two points where |a|= 𝛿. The variable “a” often refers to the residuals, that is to the difference between the observed and predicted values a=y-f(x), so the former can be expanded to: – #### Binary Classification Loss Functions Binary classifications are those predictive modelling problems where examples are assigned one of two labels. #### Binary Cross-Entropy Cross-Entropy is the loss function used for binary classification problems. It is intended for use with binary classification. Mathematically, it is the preferred loss function under the inference framework of maximum likelihood. Cross-entropy will calculate a score that summarizes the average difference between the actual and predicted probability distributions for predicting class 1. The score is minimized and a perfect cross-entropy value is 0. #### Hinge Loss The hinge loss function is popular with Support Vector Machines (SVMs). These are used for training the classifiers, ### l(y) = max(0, 1- t•y) where ‘t’ is the intended output and ‘y’ is the classifier score. Hinge loss is convex function but is not differentiable which reduces its options for minimizing with few methods. #### Multi-Class Classification Loss Functions Multi-Class classifications are those predictive modelling problems where examples are assigned one of more than two classes. #### Multi-Class Cross-Entropy Cross-Entropy is the loss function used for multi-class classification problems. It is intended for use with multi-class classification. Mathematically, it is the preferred loss function under the inference framework of maximum likelihood. Cross-entropy will calculate a score that summarizes the average difference between the actual and predicted probability distributions for all classes. The score is minimized and a perfect cross-entropy value is 0. #### Kullback Leibler Divergence Loss KL divergence is a natural way to measure the difference between two probability distributions. A KL divergence loss of 0 suggests the distributions are identical. In practice, the behaviour of KL Divergence is very similar to cross-entropy. It calculates how much information is lost (in terms of bits) if the predicted probability distribution is used to approximate the desired target probability distribution. There are also some advanced loss functions for machine learning models which are used for specific purposes. 1. Robust Bi-Tempered Logistic Loss based on Bregman Divergences 2. Minimax loss for GANs 3. Focal Loss for Dense Object Detection 4. Intersection over Union (IoU)-balanced Loss Functions for Single-stage Object Detection 5. Boundary loss for highly unbalanced segmentation 6. 
Perceptual Loss Function #### Robust Bi-Tempered Logistic Loss based on Bregman Divergences In this loss function, we introduce a temperature into the exponential function and replace the softmax output layer of the neural networks by a high-temperature generalization. Similarly, the logarithm in the loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single-layer case. When replacing the last layer of the neural networks by our bi-temperature generalization of the logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence. #### Minimax loss for GANs Minimax GAN loss refers to the minimax simultaneous optimization of the discriminator and generator models. Minimax refers to an optimization strategy in two-player turn-based games for minimizing the loss or cost for the worst case of the other player. For the GAN, the generator and discriminator are the two players and take turns involving updates to their model weights. The min and max refer to the minimization of the generator loss and the maximization of the discriminator’s loss. #### Focal Loss for Dense Object Detection The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). Therefore, the classifier gets more negative samples (or more easy training samples to be more specific) compared to positive samples, thereby causing more biased learning. The large class imbalance encountered during the training of dense detectors overwhelms the cross-entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While the weighting factor (alpha) balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus, focus training on hard negatives. More formally, we propose to add a modulating factor (1 − pt) γ to the cross-entropy loss, with tunable focusing parameter γ ≥ 0. We define the focal loss as ### FL(pt) = −(1 − pt) γ log(pt) #### Intersection over Union (IoU)-balanced Loss Functions for Single-stage Object Detection The IoU-balanced classification loss focuses on positive scenarios with high IoU can increase the correlation between classification and the task of localization. The loss aims at decreasing the gradient of the examples with low IoU and increasing the gradient of examples with high IoU. This increases the localization accuracy of models. #### Boundary loss for highly unbalanced segmentation Boundary loss takes the form of a distance metric on the space of contours (or shapes), not regions. This can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary (interface) between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complementary to regional losses. Unfortunately, it is not straightforward to represent the boundary points corresponding to the regional softmax outputs of a CNN. 
Our boundary loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2L2 distance on the space of shapes as a regional integral, which avoids completely local differential computations involving contour points. This yields a boundary loss expressed with the regional softmax probability outputs of the network, which can be easily combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. We report comprehensive evaluations on two benchmark datasets corresponding to difficult, highly unbalanced problems: the ischemic stroke lesion (ISLES) and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8% improvement in Dice score and 10% improvement in Hausdorff score. It also yielded a more stable learning process. #### Perceptual Loss Function We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pre-trained networks. We combine the benefits of both approaches and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results. #### Conclusion Loss function takes the algorithm from theoretical to practical and transforms neural networks from matrix multiplication into deep learning. In this article, initially, we understood how loss functions work and then, we went on to explore a comprehensive list of loss functions also we have seen the very recent — advanced loss functions. References: – https://arxiv.org https://www.wikipedia.org ## A Step-by-Step Guide on Python Variables Variable is the name given to the memory location where data is stored. Once a variable is stored, space is allocated in memory. Variables are named locations that are used to store references to the object stored in memory. With the rapid rise of the advanced programming techniques, matching with the pacing advancements of Machine Learning and Artificial Intelligence, the need for Python for Data Analysis an Machine Learning Using Python is growing. However, when it comes to trustworthy courses, it is better to go for the best Python Certification Training in Delhi. • Rules to Define a Variable • Assigning Values to a Variable • Re-declaring a Variable in Python • Variable Scope • Deleting a Variable #### Rules to Define a Variable These are the few rules to define a python variable: 1. Python variable name can contain small case letters (a-z), upper case letters (A-Z), numbers (0-9), and underscore (_). 3. 
We can’t use reserved keywords as a variable name. 4. The variable name can be of any length. 5. Python variable can’t contain only digits. 6. The variable names are case sensitive. #### Assigning Values to a Variable There is no need for an explicit declaration to reserve memory. The assignment is done using the equal to (=) operator. #### Multiple Assignment in Python Multiple variables can be assigned to the same variable. #### Multi-value Assignment in Python Multiple variables can be assigned to multiple objects. #### Re-declaring a Variable in Python After declaring a variable, one can again declare it and assign a new value to it. Python interpreter discards the old value and only considers the new value. The type of the new value can be different than the type of the old value. #### Variable Scope A variable scope defines the area of accessibility of the variable in the program. A Python variable has two scopes: 1. Local Scope 2. Global Scope #### Python Local Variable When a variable is defined inside a function or a class, then it’s accessible only inside it. They are called local variables and their scope is only limited to that function or class boundary. If we try to access a local variable outside its scope, we get an error that the variable is not defined. #### Python Global Variable When the variable is not inside a function or a class, it’s accessible from anywhere in the program. These variables are called global variables. #### Deleting a Variable One can delete variable using the command “del”. In the example below, the variable “d” is deleted by using command Del and when it is further proceeded to print, we get an error “variable name is not defined” which means the variable is already deleted. #### Conclusion In this article we have learned the concepts of Python variables which are used in every program. We also learned the rules associated to the naming of a variable, assigning value to a variable, scope of a variable and deleting a variable. So, if you are also hooked into Python and looking for the best courses, Python course in Gurgaon is certainly a gem of a course! This technical blog is sourced from: www.askpython.com and intellipaat.com ## An In-depth Analysis of Game Theory for AI Game Theory is a branch of mathematics used to model the strategic interaction between different players in a context with predefined rules and outcomes. With the rapid rise of AI, along with the extensive time and research we are devoting to it, Game Theory is experiencing steady growth. If you are also interested in AI and want to be well-versed with it, then, opt for the Best Artificial Intelligence Training Institute in Gurgaon now! Games have been one of the main areas of focus in artificial intelligence research. They often have simple rules that are easy to understand and train for. It is clear when one party wins, and frankly, it is fun watching a robot beat a human at chess. This trend of AI research being directed towards games is not at all an accident. Researchers know that the underlying principles of many tasks lie in understanding and mastering game theory. Both AI and game theory seek to find out how participants will react in different situations, figuring out the best response to situations, optimizing auction prices and finding market-clearing prices. ### Some Useful Terms in Game Theory • Game: Like games in popular understanding, it can be any setting where players take actions and its outcome will depend on them. 
• Player: A strategic decision-maker within a game. • Strategy: A complete plan of actions a player will take, given the set of circumstances that might arise within the game. • Payoff: The gain a player receives from arriving at a particular outcome of a game. • Equilibrium: The point in a game where both players have made their decisions and an outcome is reached. • Dominant Strategy: When one strategy is better than another strategy for one player, regardless of the opponent’s play, the better strategy is known as a dominant strategy. • Agent: Agent is equivalent to a player. • Reward: A payoff of a game can also be termed as a reward. • State: All the information necessary to describe the situation an agent is in. • Action: Equivalent of a move in a game. • Policy: Similar to a strategy. It defines the action an agent will make when in particular states • Environment: Everything the agent interacts with during learning. ### Different Types of Games in Game Theory In the game theory, different types of games help in the analysis of different types of problems. The different types of games are formed based on number of players involved in a game, symmetry of the game, and cooperation among players. #### Cooperative and Non-Cooperative Games Cooperative games are the ones in which the players are convinced to adopt a particular strategy through negotiations and agreements between them. Non-Cooperative games refer to the games in which the players decide on their strategy to maximize their profit. Non-cooperative games provide accurate results. This is because in non-cooperative games, a very deep analysis of a problem takes place. #### Normal Form and Extensive Form Games Normal form games refer to the description of the game in the form of a matrix. In other words, when the payoff and strategies of a game are represented in a tabular form, it is termed as normal form games. Extensive form games are the ones in which the description of the game is done in the form of a decision tree. Extensive form games help in the representation of events that can occur by chance. #### Simultaneous Move Games and Sequential Move Games Simultaneous games are the ones in which the move of two players (the strategy adopted by two players) is simultaneous. In a simultaneous move, players do not know the move of other players. Sequential games are the ones in which the players do not have a deep knowledge about the strategies of other players. #### Constant Sum, Zero Sum, and Non-Zero Sum Games Constant sum games are the ones in which the sum of outcome of all the players remains constant even if the outcomes are different. Zero sum games are the ones in which the gain of one player is always equal to the loss of the other player. Non-zero sum games can be transformed to zero sum game by adding one dummy player. The losses of the dummy player are overridden by the net earnings of players. Examples of zero sum games are chess and gambling. In these games, the gain of one player results in the loss of the other player. #### Symmetric and Asymmetric Games Symmetric games are the ones where the strategies adopted by all the players are the same. Symmetry can exist in short-term games only because in long-term games the number of options with a player increases. Asymmetric games are the ones where the strategies adopted by players are different. In asymmetric games, the strategy that provides benefit to one player may not be equally beneficial for the other player. 
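To make the normal-form representation and the equilibrium idea concrete before turning to AI, here is a small illustrative sketch (an addition, not from the original article). The payoff numbers follow the classic Prisoner's Dilemma, which is also referenced later in the Nash equilibrium discussion; the code simply brute-forces the pure-strategy Nash equilibria of the 2×2 payoff matrix.

```python
# A 2x2 normal-form game and a brute-force search for pure-strategy Nash
# equilibria: outcomes where no player can gain by changing only their own
# strategy. Payoffs are the textbook Prisoner's Dilemma values (years lost).
import itertools

actions = ["cooperate", "defect"]

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def is_nash(a1, a2):
    p1, p2 = payoffs[(a1, a2)]
    # player 1 checks unilateral deviations, player 2 likewise
    best1 = all(payoffs[(alt, a2)][0] <= p1 for alt in actions)
    best2 = all(payoffs[(a1, alt)][1] <= p2 for alt in actions)
    return best1 and best2

equilibria = [cell for cell in itertools.product(actions, actions) if is_nash(*cell)]
print(equilibria)   # [('defect', 'defect')], i.e. both players confess/defect
```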
### Game Theory in Artificial Intelligence Development of the majority of the popular games which we play in this digital world is with the help of AI and game theory. Game theory is used in AI whenever there is more than one person involved in solving a logical problem. There are various algorithms of Artificial Intelligence which are used in Game Theory. Minimax algorithm in Game Theory is one of the oldest algorithms in AI and is used generally for two players. Also, game theory is not only restricted to games but also relevant to the other large applications of AI like GANs (Generative Adversarial Networks). GAN consists of 2 models, a discriminative model and a generative model. These models are participants on the training phase which looks like a game between them, and each model tries to better than the other. The target of the generative model is to generate samples that are considered to be fake and are supposed to have the same distribution of the original data samples; on the other hand, the target of discriminative is to enhance itself to be able to recognize the real samples among the fake samples generated by the generative model. It looks like a game, in which each player (model) tries to be better than the other, the generative model tries to generate samples that deceive and tricks the discriminative model, while the discriminative model tries to get better in recognizing the real data and avoid the fake samples. It is the same idea of the Minimax algorithm, in which each player targets to outclass the other and minimize the supposed loss. This game continues until a state where each model becomes an expert on what it is doing. The generative model increases its ability to get the actual data distribution and produces data like it, and the discriminative becomes an expert in identifying the real samples, which increases the system’s classification task. In such a case, each model satisfied by its output (strategy), this is called Nash Equilibrium in Game Theory. ### Nash Equilibrium Nash equilibrium, named after Nobel winning economist, John Nash, is a solution to a game involving two or more players who want the best outcome for themselves and must take the actions of others into account. When Nash equilibrium is reached, players cannot improve their payoff by independently changing their strategy. This means that it is the best strategy assuming the other has chosen a strategy and will not change it. For example, in the Prisoner’s Dilemma game, confessing is Nash equilibrium because it is the best outcome, taking into account the likely actions of others. ### Conclusion So in this article, the fundamentals of Game Theory and essential topics are covered in brief. Also, this article gives an idea of the influence of game theory artefacts in the AI space and how Game Theory is being used in the field of Machine Learning and its real-world implementations. Machine Learning is an ever-expanding application of Artificial Intelligence with numerous applications in the other existing fields. Besides, Machine Learning Using Python is also on the verge of proving itself to be a foolproof technology in the coming years. So, don’t wait and enrol in the world-class Artificial Intelligence Certification in Delhi NCR now and rest assured! ## Statistical Application in R & Python: Negative Binomial Distribution Negative binomial distribution is a special case of Binomial distribution. 
If you haven’t checked the Exponential Distribution, then read through the Statistical Application in R & Python: EXPONENTIAL DISTRIBUTION. It is important to know that the Negative Binomial distribution could be of two different types, i.e. – Type 1 and Type 2. In many ways, it could be seen as a generalization of the geometric distribution. The Negative Binomial Distribution essentially operates on the same principals as the binomial distribution but the objective of the former is to model for the success of an event happening in “n” number of trials. Here it is worth observing that the Geometric distribution models for the first success whereas a Negative Binomial distribution models for the Kth This is explained below. Type 1 Binomial distribution aims to model the trails up to and including the “kth success” in “n number of trials”. To give a simple example, imagine you are asked to predict the probability that the fourth person to hear a gossip will believe that! This kind of prediction could be made using the negative binomial type 1 distribution. Conversely, Type 2 Binomial distribution is used to model the number of failures before the “kth success”. To give an example, imagine you are asked about how many penalty kicks it will take before a goal is scored by a particular football player. This could be modeled using a negative binomial type 2 distribution, which might be pretty tricky or almost impossible with any other methods. The probability distribution function is given below: In the next section, we will take you through its practical application in Python and R. #### Application: Mr. Singh works in an Insurance Company where his target is to sale a minimum of five policies in a day. On a particular day, he had already sold 2 policies after numerous attempts. The probability of sales on each policy is 0.6. Now, if the policies may be considered as independent Bernoulli trials, then: 1. What is the probability that he has exactly 4 failed attempts before his 3rd successful sales of the day? 2. What is the probability that he was fewer than 4 failed attempts before his 3rd successful sales of the day? So, the number of sales = 3. The probability of failed attempts is 4. The success of each sale is 0.6. #### Calculate Negative Binomial Distribution in R: In R, we calculate negative binomial distribution to find the probability of insurance sales. Thus, we get, 1. The probability that he has exactly 4 failed attempts before his 3rd successful sales are 8.29%. 2. The probability that he has fewer than 4 failed attempts before his 3rd successful sales is 82.08%. Hence, we can see that chances are quite high that Mr. Singh will succeed in making a sale after 4 failed attempts. #### Calculate Negative Binomial Distribution in Python: In Python, we get the same results as above. #### Conclusion: Negative Binomial distribution is the discrete probability distribution that is actually used for calculating the success and failure of any observation. When applied to real-world problems, the outcomes of the successes and failures may or may not be the outcomes we ordinarily view as good and bad, respectively. Suppose we used the negative binomial distribution to model the number of days a certain machine works before it breaks down. In this case, “success” would be the days that the machine worked properly, whereas the day when the machine breaks down would be a “failure”. 
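As an aside, the two probabilities quoted in the worked example above (8.29% and 82.08%) can be reproduced in Python with scipy.stats. The original post shows the R and Python computations only as screenshots, so the snippet below is an illustrative reconstruction using scipy's parameterization of the negative binomial (number of failures before the k-th success), with k = 3 successes and success probability p = 0.6 as in the example of Mr. Singh.

```python
# Illustrative reconstruction of the insurance example: k = 3 successful sales,
# success probability p = 0.6, counting the number of failed attempts.
from scipy.stats import nbinom

k, p = 3, 0.6

# P(exactly 4 failures before the 3rd success)
print(round(nbinom.pmf(4, k, p), 4))   # 0.0829 -> 8.29%

# P(fewer than 4 failures before the 3rd success) = P(X <= 3)
print(round(nbinom.cdf(3, k, p), 4))   # 0.8208 -> 82.08%
```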
Another example would be, if we used the negative binomial distribution to model the number of attempts an athlete makes on goal before scoring r goals, though, then each unsuccessful attempt would be a “success”, and scoring a goal would be “failure”. This blog will surely aid in developing a better understanding of how negative binomial distribution works in practice. If you have any comments please leave them below. Besides, if you are interested in catching up with the cutting edge technologies, then reach the premium training institute of Data Science and Machine Learning leading the market with the top-notch Machine Learning course in India. ## Artificial Intelligence Jobs: Data Science and Beyond! Artificial Intelligence is the latest technology that the industry of computer science has been working on for quite some time now. Though it has not yet been possible to materialize the high-end AIs, weak/narrow Artificial Intelligence which includes, Siri, Cortana, Bixby, Tesla, are the ones that have grown to be simply inseparable in our daily lives. This is simultaneous with the widespread of the Artificial intelligence Course in Delhiwhich is encouraging more and more students to explore new-age technologies. With the extensive research and tests carried out on all these new technologies to implement them in the modern industries; AI is yielding more jobs than ever before. #### Jobs Springing from the Artificial Intelligence Artificial intelligence and data always go hand in hand because it is the data that helps us gain insight into the results. Thus, it is not surprising that the professionals utter AI and data at the same instant. When Amazon mentioned of up-skilling 100,000 employees from the United States to make them ready for the technology of the age, they also claimed that the machines with the ability to deal with data are responsible for most of these jobs. There have been huge changes in the figures since then, with the data mapping scientists increased to 832%, the total data scientists jumped by 505%, and the total business analysts hiked about 160%. Besides, there is also a marked demand for the other employees, who are from a non-technological background. However, most of these are associated with Artificial Intelligence, like logistics coordinator and executive; process improvement manager; transportation specialist and so on. Thus, in contradiction to our surmises that AI and its likes will throttle our jobs and crumble every other our opportunities of the same are turning out to be false for good! #### Drawing to a Close Whether it is Machine Learning, Data Science or Artificial Intelligence, we are noticing a rapid progress and can easily count on a better future rich with technology. However, with the increasing hardware, software and advanced computing, the need to grasp the pacing technology thoroughly is becoming predominant. Thus, Machine Learning Using PythonNeural Network Machine Learning Python and Data Science Courses in Gurgaon are rising in demand to meet the need of the mass. However, you should always go for the best Artificial Intelligence Training Institute in Gurgaon to imbibe a wholesome knowledge of the subject. . ## How to Structure Python Programs? An Extensive Guide Python is an extremely readable and versatile high-level programming language. It supports both Object-oriented programming as well as Functional programming. 
It is generally referred to as an interpreted language which means that each line of code is executed one by one and if the interpreter finds an error it stops proceeding further and gives an error message to the user. This makes Python a widely regarded language, fueling Machine Learning Using Python, Text Mining with Python course and more. Furthermore, with such a high-end programming language, Python for data analysis looks ahead for a bright future. #### In the Structure of Python Computer languages have a structure just like human languages. Therefore, even in Python, we have comments, variables, literals, operators, delimiters, and keywords. To understand the program structure of Python we will look at the following in this article: – 1. Python Statement • Simple Statement • Compound Statement 2. Multiple Statements Per Line 3. Line Continuation • Implicit Line Continuation • Explicit Line Continuation 5. Whitespace 6. Indentation 7. Conclusion #### Python Statement A statement in Python is a logical instruction that the interpreter reads and executes. The interpreter executes statements sequentially, one by one. In Python, it could be an assignment statement or an expression. The statements are mostly written in such a style so that each statement occupies a single line. ##### Simple Statements A simple statement is one that contains no other statements. Therefore, it lies entirely within a logical line. An assignment is a simple statement that assigns values to variables, unlike in some other languages; an assignment in Python is a statement and can never be part of an expression. ##### Compound Statement A compound statement contains one or more other statements and controls their execution. A compound statement has one or more clauses, aligned at the same indentation. Each clause has a header starting with a keyword and ending with a colon (:), followed by a body, which is a sequence of one or more statements. When the body contains multiple statements, also known as blocks, these statements should be placed on separate logical lines after the header line, indented four spaces rightward. #### Multiple Statements per Line Although it is not considered good practice multiple statements can be written in a single line in Python. It is advisable to avoid multiple statements in a single line. But, if it is necessary, then it can be written with the help of semicolon (;) as the terminator of every statement. #### Line Continuation In Python there might be some cases when a single statement is too long that does not fit the browser window and one needs to scroll the screen left or right. This can be a case of assignment statement with many terms or defining a lengthy nested list. These long statements of code are generally considered a poor practice. To maintain readability, it is advisable to split the long statement into parts across several lines. In Python code, a statement can be continued from one line to the next in two different ways: implicit and explicit line continuation. ##### Implicit Line Continuation This is the more straightforward technique for line continuation. In implicit line continuation, one can split a statement using either of parentheses ( ), brackets [ ] and braces { }. Here, one needs to enclose the target statement using the mentioned construct. ##### Explicit Line Continuation In cases where implicit line continuation is not readily available or practicable, there is another option. 
This is referred to as an explicit line continuation or explicit line joining. Here, one can right away use the line continuation character (\) to split a statement into multiple lines. A comment is text that doesn’t affect the outcome of a code; it is just a piece of text to let someone know what you have done in a program or what is being done in a block of code. This is especially helpful when a code is written and someone is analyzing it for bug fixing or making a change in logic, by reading a comment one can understand the purpose of code much faster than by just going through the actual code. There are two types of comments in Python. 1. Single line comment 2. Multiple line comment #### Single line comment In python, one can use # special character to start the comment. #### Multi-line comment To have a multi-line comment in Python, one can use Triple Double Quotation at the beginning and the end of the comment. #### Whitespace One can improve the readability of the code with the use of whitespaces. Whitespaces are necessary for separating the keywords from the variables or other keywords. Whitespace is mostly ignored by the Python interpreter. #### Indentation Most of the programming languages provide indentation for better code formatting and don’t enforce to have it. However, in Python, it is mandatory to obey the indentation rules. Typically, we indent each line by four spaces (or by the same amount) in a block of code. Also for creating compound statements, the indentation will be of utmost necessity. #### Conclusion So, this article was all about how to structure the Python program. Here, one can learn what constitutes a valid Python statement and how to use implicit and explicit line continuation to write a statement that spans multiple lines. Furthermore, one can also learn about commenting Python code, and about the use of whitespace and indentation to enhance the overall readability. We hope this article was helpful to y ou. If you are interested in similar blogs, stay glued to our website, and keep following all the news and updates from Dexlab Analytics. ## Machine Learning in the Healthcare Sector The healthcare industry is one of the most important industries when it comes to human welfare. Research analysis from the U.S. federal government actuaries say that Americans spent $3.65 trillion on health care in 2018(report from Axios) and the Indian healthcare market is expected to reach$ 372 billion by 2022. To reduce cost and to move towards a personalized healthcare system, the industry faces three major hurdles: – 1) Electronic record management 2) Data integration 3) Computer-aided diagnoses. Machine learning in itself is a vast field with a wide array of tools, techniques, and frameworks that can be exploited and manipulated to cope with these challenges. In today’s time, Machine Learning Using Python is proving to be very helpful in streamlining the administrative processes in hospitals, map and treat life-threatening diseases and personalizing medical treatments. This blog will focus primarily on the applications of Machine learning in the domain of healthcare. #### Real-life Application of Machine learning in the Health Sector 1. MYCIN system was incepted at Stanford University. The system was developed in order to detect specific strains of bacteria that cause infections. It proposed a good therapy in 69% of the cases which was at that time better than infectious disease experts. 2. 
In the 1980s at the University of Pittsburgh, a diagnostic tool named INTERNIST-I was developed to diagnose symptoms of various diseases like flu, pneumonia, diabetes and more. One of the key functionalities of the INTERNIST-I was to be able to detect the problem areas. This is done with a view of being able to remove diagnostics’ likelihood. 3. AI trained by researchers from Pennsylvania has been developed recently which is capable of predicting patients who are most likely to die within a year. This is assessed based on their heart test results. This AI is capable of predicting the death of patients even if the figures look quite normal to the doctors. The researchers have trained the AI with 1.77 million electrocardiograms (ECG) results. The researchers have made two versions of this Al: one with just the ECG data and the other one with ECG data along with the age and gender of the patients. 4. P1vital’s PReDicT (Predicting Response to Depression Treatment) built on the Machine Learning algorithms aims to develop a commercially feasible way to diagnose and provide treatment of depression in clinical practice. 5. KenSci has developed machine learning algorithms to predict illnesses and their cure to enable doctors with the ability to detect specific patterns and indicators of population health risks. This comes under the purview of model disease progression. 6. Project Hanover developed by Microsoft is using Machine Learning-based technologies for multiple purposes, which includes the development of AI-based technology for cancer treatment and personalizing drug combination for Acute Myeloid Leukemia (AML). 7. Preserving data in the health care industry has always been a daunting task. However, with the forward-looking steps in analytics-related technology, it has become more manageable over the years. The truth is that even now, a majority of the processes take a lot of time to complete. 8. Machine learning can prove to be disruptive in the medical sector by automating processes relating to data collection and collation. This is highly profitable in terms of cost-effectiveness. Newer algorithms such as Vector Machines or OCR recognition are designed to automate the task of document reading and classification with high levels of precision and accuracy. 9. PathAI’s technology uses machine learning to help pathologists make faster and more accurate diagnoses. Furthermore, it also helps in identifying patients who might benefit from a new and different type of treatments or therapies in the future. #### To Sum Up: As the modern technologies of Machine Learning, Artificial Intelligence and Big Data Analytics are tottering forth in multiple domains, there is a long path they need to walk to ensure an unflinching success. Besides, it is also important for every one of us to be accustomed to all these new-age technologies. With an expansion of the quality Machine Learning course in India and Neural Network Machine learning Python, all the reputed institutes are joining hands together to bring in the revolution. The initial days will be slow and hard, but it is no doubt that these cutting edge technologies will transform the medical industry along with a range of other industries, making early diagnoses possible along with a reduction of the overall cost. Besides, with the introduction of successful recommender systems and other promises of personalized healthcare, coupled with systematic management of medical records, Machine Learning will surely usher in the future for good! . 
## 8 Amazing Things That Artificial Intelligence Can Do AI plays a crucial role in our everyday lives. By now, we are aware of AI’s glaring significance in our very existence. Nevertheless, you would be surprised to know that AI has already imbibed some of the skills that we, humans, possess. Ahead, we’ve 8 incredible skills that AI has learnt over the years: Wondering how to summarize all those kilobytes of information? AI-powered SummarizeBot is the answer. Whether its books, news articles, weblinks, audio/image files or legal documents, ATS (automatic text summarization) reads everything and records the important information. Natural Language Processing (NLP), artificial intelligence, machine learning and blockchain technologies are in play here. #### Write Did you know that myriad news enterprises and seasoned journalists rely on AI to write? The New York Times, Reuters, Washington Post and more have turned to artificial intelligence to craft interesting reading pieces. Also, AI is expected to enhance the process of creative writing as well.  Even, it has generated a novel that was shortlisted for a prestigious award. #### See Machine vision is in the hype. It is implemented in different ways in today’s world, such as facilitating self-driven cars, facial recognition for payment portals, police work and more. The main concept of machine vision is to let the computers ‘visualize’ the world, analyze key data and make decisions thereafter. #### Speak We are fortunate enough to have Google Maps and Alexa to give us directions and respond to our queries but Google Duplex takes it to a whole new level, courtesy AI. With the help of this robust technology, Duplex can schedule appointments and finish tasks on phone in a very interactive language. It can also respond perfectly to human behaviors. #### Hear and Understand Detecting gunshots and alerting to-the-purpose agencies is one of the greatest things achieved by AI. It means AI can hear and understand sound. It is very well evident in how digital voice assistants respond to your queries regarding weather or a day’s agenda. Working professionals simply love the efficiency, accuracy and convenience of automated meeting minutes provided by AI. #### Touch With the help of cameras and sensors, a robot can identify and handpick ‘supermarket ripe’ blueberries and put them in your basket. The creator of the robot even asserts that it is designed to pick one blueberry every 10 seconds for 24 hours a day! #### Smell A team of AI researchers are at present developing robust AI models that can detect illnesses – simply by smelling. The model is designed in such a way so that it can notice chemicals, known as aldehydes that cause human stress and diseases, including diabetes, cancer and brain injuries. AI bots can even identify other caustic chemicals or gas leaks. Of late, IBM is using AI to formulate new perfumes. #### Perceive Emotions Today, AI tools can observe human emotions and track them down as one watches videos. Artificial emotional intelligence can collect meaningful data from a person’s facial expressions or body language, analyze it to determine what emotion he/she is likely to express and then ascertain an action base on that detail. For more such interesting updates, follow DexLab Analytics. Our Machine Learning Using Python course is a bestseller. To know more, click here <www.dexlabanalytics.com>
2022-09-30 13:19:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3496408760547638, "perplexity": 1109.7500330406776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00244.warc.gz"}
https://fjp.at/posts/optimal-frenet/
# Trajectory Planning in the Frenet Space

There are many ways to plan a trajectory for a robot. A trajectory can be seen as a set of time-ordered state vectors $x$. The following algorithm introduces a way to plan trajectories to maneuver a mobile robot in a 2D plane. It is specifically useful for structured environments, like highways, where a rough path, referred to as the reference, is available a priori.

Path planning in Frenet coordinates:

## Algorithm

1. Determine the trajectory start state $[x_1,x_2,\theta,\kappa,v,a](0)$
   The trajectory start state is obtained by evaluating the previously calculated trajectory at the prospective start state (low-level stabilization). At system initialization and after reinitialization, the current vehicle position is used instead (high-level stabilization).

2. Selection of the lateral mode
   Depending on the velocity $v$, the time-based ($d(t)$) or the running-length/arc-length-based ($d(s)$) lateral planning mode is activated. By projecting the start state onto the reference curve, the longitudinal start position $s(0)$ is determined. The Frenet state vector $[s,\dot{s},\ddot{s},d,d',d''](0)$ can be determined using the Frenet transformation. For the time-based lateral planning mode, $[\dot{d}, \ddot{d}](0)$ need to be calculated.

3. Generating the lateral and longitudinal trajectories
   Trajectories including their costs are generated for the lateral (mode-dependent) as well as the longitudinal motion (velocity keeping, vehicle following / distance keeping) in the Frenet space. In this stage, trajectories with high lateral accelerations with respect to the reference path can be neglected to improve the computational performance.

4. Combining lateral and longitudinal trajectories
   The partial costs of the lateral and longitudinal motion are summed using $J(d(t),s(t)) = J_d(d(t)) + k_s \cdot J_s(s(t))$. For every active longitudinal mode, each longitudinal trajectory is combined with each lateral trajectory and transformed back to world coordinates using the reference path. The trajectories are then checked point-wise for curvature and acceleration to verify that they obey the physical driving limits. This leads to a set of potentially drivable maneuvers of a specific mode in world coordinates.

5. Static and dynamic collision check
   Every trajectory set is evaluated in order of increasing total cost, checking whether static and dynamic collisions are avoided. The trajectory with the lowest cost is then selected.

6. Longitudinal mode alternation
   Based on the sign of the initial jerk $\dot{a}(0)$, the trajectory with the strongest deceleration or, respectively, the trajectory that accelerates the least is selected and passed to the controller.
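The sketch below is an illustrative addition (not code from this repository) showing the core of steps 3 and 4: a lateral candidate $d(t)$ is represented by a quintic polynomial fitted to start and end conditions, its squared-jerk cost $J_d$ is evaluated, and the total cost is combined with a longitudinal cost as $J = J_d + k_s \cdot J_s$. The boundary conditions, $k_s$ and $J_s$ below are made-up example values.

```python
# Illustrative sketch of steps 3 and 4: quintic lateral candidate + combined cost.
import numpy as np

def quintic_coeffs(d0, d0_d, d0_dd, dT, dT_d, dT_dd, T):
    """Coefficients of d(t) = a0 + a1*t + ... + a5*t^5 matching position,
    velocity and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ], dtype=float)
    b = np.array([d0, d0_d, d0_dd, dT, dT_d, dT_dd], dtype=float)
    return np.linalg.solve(A, b)

def jerk_cost(coeffs, T, n=200):
    """Integral of squared jerk, a common smoothness cost for a candidate."""
    t = np.linspace(0.0, T, n)
    jerk = 6*coeffs[3] + 24*coeffs[4]*t + 60*coeffs[5]*t**2
    return np.sum(jerk**2) * (t[1] - t[0])   # simple Riemann approximation

# Lateral candidate: start 1 m left of the reference, end on it after T = 4 s.
T = 4.0
lateral = quintic_coeffs(1.0, 0.0, 0.0, 0.0, 0.0, 0.0, T)
J_d = jerk_cost(lateral, T)

# Step 4: combine with the cost of some longitudinal candidate (illustrative values).
J_s, k_s = 0.8, 1.0
J_total = J_d + k_s * J_s
print(J_d, J_total)
```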
# Frenet Coordinates

"Frenet coordinates" are a way of representing position on a road that is more intuitive than traditional (x, y) Cartesian coordinates. With Frenet coordinates, we use the variables s and d to describe a vehicle's position on the road or a reference path. The s coordinate represents distance along the road (also known as longitudinal displacement) and the d coordinate represents the side-to-side position on the road relative to the reference path (also known as lateral displacement). In the following sections the advantages and disadvantages of Frenet coordinates are compared to Cartesian coordinates.

## Frenet Features

The image below depicts a curvy road with a Cartesian coordinate system laid on top of it, as well as a continuously curved reference path (for example the middle of the road). The next image shows the same reference path together with its Frenet coordinates. The s coordinate represents the run length and starts with s = 0 at the beginning of the reference path. Lateral positions relative to the reference path are represented with the d coordinate: positions on the reference path have d = 0, and d is positive to the left of the reference path and negative to the right of it, although this depends on the convention used for the local reference frame.

The image above shows that curved reference paths (such as curvy roads) are represented as straight lines on the s axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non-straight motions in Frenet coordinates. Instead, such motions result in an offset from the reference path, and therefore from the s axis, which is described by the d coordinate. The following image shows the two different representations (Cartesian vs. Frenet). To use Frenet coordinates, a continuously smooth reference path is required.

### Reference Path

Frenet coordinates provide a mathematically simpler representation of a reference path, because its run length is described with the s axis. This reference path provides a rough reference to follow an arbitrary but curvature-continuous course of the road. To avoid collisions, the planner must take care of other objects in the environment, either static or dynamic; such objects are usually not avoided by the reference path itself. A reference path can be represented in several different forms, although for all representations run-length information, which represents the s axis, is required for the transformation:

• Polynomial
• Spline (multiple polynomials)
• Clothoid (special polynomial)
• Polyline (single points with run length information)

#### Clothoid

A clothoid is a curve whose curvature varies linearly with its run length $l$: $\kappa(l) = c_0 + c_1 \cdot l$

### Transformation

The transformation from local vehicle coordinates to Frenet coordinates is based on the relations shown in the following image: given a point $P_C$ in the vehicle frame, search for the closest point $R_C$ on the reference path. The run length of $R_C$, which is known from the reference path points, determines the s coordinate of the transformed point $P_F$. If the reference path is sufficiently smooth (continuously differentiable), then the vector $\vec{PR}$ is orthogonal to the reference path at the point $R_C$. The signed length of $\vec{PR}$ determines the d coordinate of $P_F$; the sign is positive if $P_C$ lies on the left along the run length of the reference path.

The procedure to transform a point $P_F$ from Frenet coordinates to the local vehicle frame in Cartesian coordinates is analogous. First, the point $R_C$, which lies on the reference path at run length $s$, is determined. Next, a normal unit vector $\vec{d}$ is determined which, at this point, is orthogonal to the reference path. The direction of this vector points towards positive $d$ values and therefore to the left with increasing run length $s$. The vector $\vec{d}$ thus depends on the run length, which leads to:

$P_C(s,d) = R_C(s) + d \cdot \vec{d}(s)$
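To make the transformation concrete, here is a minimal Python sketch that converts a Frenet point $(s, d)$ back to Cartesian coordinates for a polyline reference path (the representation above that carries run-length information per point). It is a simplified stand-in rather than the code from this repository; the linear interpolation and the left-pointing normal are assumptions that only work well for densely sampled, smooth polylines.

```python
import numpy as np

def frenet_to_cartesian(s, d, ref_xy):
    """Convert a Frenet point (s, d) to Cartesian coordinates.

    ref_xy : (N, 2) array of densely sampled reference-path points.
    Implements P_C(s, d) = R_C(s) + d * d_vec(s), where d_vec is the
    unit normal pointing to the left of the path.
    """
    ref_xy = np.asarray(ref_xy, dtype=float)
    seg = np.diff(ref_xy, axis=0)                          # segment vectors
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    run_len = np.concatenate(([0.0], np.cumsum(seg_len)))  # run length at each vertex

    s = np.clip(s, 0.0, run_len[-1])
    i = np.searchsorted(run_len, s, side="right") - 1
    i = min(i, len(seg) - 1)

    # Point R_C(s) by linear interpolation along segment i.
    t = (s - run_len[i]) / seg_len[i]
    r_c = ref_xy[i] + t * seg[i]

    # Unit tangent and left-pointing unit normal d_vec(s).
    tangent = seg[i] / seg_len[i]
    d_vec = np.array([-tangent[1], tangent[0]])

    return r_c + d * d_vec

# Example: a quarter-circle reference path of radius 10 m.
theta = np.linspace(0.0, np.pi / 2, 200)
ref = np.stack([10 * np.cos(theta), 10 * np.sin(theta)], axis=1)
print(frenet_to_cartesian(s=5.0, d=1.0, ref_xy=ref))   # 1 m to the left of the path at s = 5 m
```

The forward transformation (Cartesian to Frenet) follows the same pattern: find the closest point on the polyline, take its run length as s, and project onto the left normal to obtain the signed d.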
## Usage

### Jupyter Notebook

In the python folder you find a Jupyter Notebook which shows the described planning algorithm.

### Frenet GUI

Note: The Frenet GUI is not functional yet. Contributions are welcome. Here's the plan for this GUI: it allows you to generate trajectories in a local (world) reference frame from two quintic polynomials, one describing the longitudinal direction and the other the lateral. Providing a reference path and applying the Frenet coordinate transformation to this path will result in a trajectory.

Execute frenet.sh to run the GUI. This will call uic (Qt's user interface compiler) to process the ui file. Afterwards main.py will be executed.

### Dependencies

The GUI was created with python3 in a conda environment:

conda create -n frenetenv python=3.6
conda activate frenetenv
conda install -c conda-forge pyside2
conda install -c conda-forge matplotlib

You can use the provided environment.yml to create a conda environment with the required dependencies:

conda create --name <env-name> --file environment.yml
conda activate <env-name>

## References

### Python code:

• https://github.com/AtsushiSakai/PythonRobotics#optimal-trajectory-in-a-frenet-frame
• https://github.com/AtsushiSakai/PythonRobotics/tree/master/PathPlanning/FrenetOptimalTrajectory

## Helper Function

https://de.mathworks.com/matlabcentral/fileexchange/22441-curve-intersections

Tags: Categories: Updated:
2020-09-19 08:33:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6684105396270752, "perplexity": 1277.675461427925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00378.warc.gz"}
https://proofwiki.org/wiki/Finite_Ordinal_is_not_Subset_of_one_of_its_Elements
# Finite Ordinal is not Subset of one of its Elements

## Theorem

Let $n$ be a finite ordinal. Then: $\nexists x \in n: n \subseteq x$ that is, $n$ is not a subset of any of its elements.

## Proof

Let $S$ be the set of all those finite ordinals $n$ which are not a subset of any of their elements. That is: $n \in S \iff n \in \omega \land \forall x \in n: n \nsubseteq x$ We know that $0 = \varnothing$ is not a subset of any of its elements, as $\varnothing$ by definition has no elements. So $0 \in S$. Now suppose $n \in S$. Trivially $n \subseteq n$, so as $n \in S$ it follows by definition of $S$ that: $n \notin n$ By definition of the successor of $n$ we have $n \in n^+$, so $n^+ \subseteq n$ would give $n \in n$; it follows that: $n^+ \nsubseteq n$ Now from Subset Relation is Transitive and $n \subseteq n^+$: $n^+ \subseteq x \implies n \subseteq x$ But since $n \in S$, for every $x \in n$ we have $n \nsubseteq x$, and hence $n^+ \nsubseteq x$. So: $n^+ \nsubseteq n$ and: $\forall x \in n: n^+ \nsubseteq x$ Since the elements of $n^+$ are exactly the elements of $n$ together with $n$ itself, $n^+$ is not a subset of any of its elements. That is: $n^+ \in S$ So by the Principle of Mathematical Induction: $S = \omega$ Hence the result. $\blacksquare$
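The statement can also be sanity-checked on the first few finite ordinals by modelling them as nested frozensets in Python; this is only an illustration of the definitions above, not a substitute for the induction.

```python
def ordinals(k):
    """The von Neumann ordinals 0, 1, ..., k-1 as nested frozensets,
    built with the successor operation n+ = n ∪ {n}."""
    n, out = frozenset(), []
    for _ in range(k):
        out.append(n)
        n = n | frozenset([n])
    return out

for n in ordinals(8):
    # The theorem: n is not a subset of any of its elements.
    assert all(not n <= x for x in n)
print("checked: no finite ordinal below 8 is a subset of one of its elements")
```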
2020-07-13 01:07:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779123067855835, "perplexity": 210.04832520803646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140746.69/warc/CC-MAIN-20200713002400-20200713032400-00317.warc.gz"}
https://quant.stackexchange.com/questions/35708/assumption-in-black-scholes-solution?noredirect=1
# Assumption in Black-Scholes solution Under the usual notations, in most textbooks on Quantitative Finance, for deriving the Black-Scholes solution I find that authors, while setting up the riskless portfolio, assume that $$\text{d} (\frac{\partial V}{\partial S} S_t) = \frac{\partial V}{\partial S} \text{d} S_t$$ At least, can we prove this post facto? That is, does this equation hold true for the famous Black-Scholes equation? The same issue is also pointed out here. • Uhm, so you're saying your question is a duplicate? I see that the question you link to also has an answer. Where do you require further clarification? – Bob Jansen Aug 20 '17 at 20:17 • The answers to that question are not accepted; also, while the question is similar, what I am asking is a slightly different one. If someone were to answer this question, probably a part of the linked question would be answered. – kasa Aug 20 '17 at 20:21 • Yes, that is precisely my question!!! – kasa Aug 20 '17 at 20:27 This is not true. Note that $\frac{\partial C}{\partial S_t} = N(d_1)$. Then \begin{align*} d\left(\frac{\partial C}{\partial S_t}S_t\right) &= \underbrace{S_t dN(d_1) + d\langle N(d_1), S\rangle_t} + N(d_1) dS_t\\ &\ne N(d_1) dS_t. \end{align*} That is, \begin{align*} d\left(\frac{\partial C}{\partial S_t}S_t\right)\ne \frac{\partial C}{\partial S_t}dS_t. \end{align*} • $${S_t dN(d_1) + d\langle N(d_1), S\rangle_t}$$ Can this be negligible? The reason I'm asking is that textbooks like Hull are rarely wrong in these matters. I hope I am making myself clear. – kasa Aug 21 '17 at 3:13
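The point of the answer above can be illustrated numerically: along a simulated geometric Brownian motion path, the per-step change of $\frac{\partial C}{\partial S_t}S_t = N(d_1)S_t$ differs from $N(d_1)\,\mathrm{d}S_t$ because the delta $N(d_1)$ itself moves with $S_t$ and $t$. The sketch below (plain NumPy/SciPy, with arbitrarily chosen parameters) compares the two increments; it is an added illustration, not part of the original question or answer.

```python
import numpy as np
from scipy.stats import norm

# Arbitrary Black-Scholes parameters for the illustration.
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
n_steps, dt = 125, 1.0 / 250          # simulate half a year; the option matures at T = 1
rng = np.random.default_rng(0)

def d1(S, t):
    tau = T - t
    return (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))

# One GBM path.
z = rng.standard_normal(n_steps)
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))
S = np.concatenate(([S0], S))
t = np.arange(n_steps + 1) * dt

delta = norm.cdf(d1(S[:-1], t[:-1]))    # N(d1) at the start of each step
dS = np.diff(S)
lhs = np.diff(norm.cdf(d1(S, t)) * S)   # d(N(d1) * S) over each step
rhs = delta * dS                        # N(d1) * dS over each step

print("mean |d(N(d1) S) - N(d1) dS| :", np.abs(lhs - rhs).mean())
print("mean |N(d1) dS|              :", np.abs(rhs).mean())
```

Per step, the difference is of the same order as $N(d_1)\,\mathrm{d}S_t$ itself, which is exactly the $S_t\,\mathrm{d}N(d_1) + \mathrm{d}\langle N(d_1), S\rangle_t$ contribution identified in the answer.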
2019-06-26 16:04:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9783920049667358, "perplexity": 2000.7056668423377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00265.warc.gz"}
http://www.juanofwords.com/2015/10/la-reyna-del-facebook/
If this isn’t a sign of the times, we’re not sure what is! The song is aptly titled “The Queen of Facebook” and as you might imagine, it’s all about that little obsession so many of us seem to have as of late: social media. More to the point: Facebook. Feis. Or El Feis. Whatever you choose to call it. The group performing the song is Los Cocineros Del Norte from Fresno, California. Here’s our interpretation of the lyrics in English: Not sure if you’ve noticed for being a flirt and a liar for being stuck up and conceited You are the Queen of Facebook pictures in only underwear And that’s how your ex is, and my ex and the next and that’s how your ex is, and my ex and the next I think you like that people click like on everything that everyone looks at you and starts to comment You publish locations everywhere you go showing off a new love that poor “guey” who’s with you And that’s how your ex is, and my ex and the next and that’s how your ex is, and my ex and the next that according to you are provocative but you don’t provoke me and that you are in love and what that “guey” doesn’t know is that I left you all washed up And that’s how your ex is, and my ex and the next and that’s how your ex is, and my ex and the next And that’s how your ex is, and my ex and the next and that’s how your ex is, and my ex and the next ¿Cómo la ven? What do you think about the song?
2017-07-26 16:44:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8867164254188538, "perplexity": 6315.142925846464}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426234.82/warc/CC-MAIN-20170726162158-20170726182158-00632.warc.gz"}
https://www.asvabtestbank.com/arithmetic-reasoning/t/76/p/practice-test/848021/5
## ASVAB Arithmetic Reasoning Operations on Exponents Practice Test 848021 Questions 5 Focus Operations on Exponents Topics Adding & Subtracting Exponents, Exponent to a Power, Multiplying & Dividing Exponents, Negative Exponent Question Type Problems

#### Study Guide

###### Adding & Subtracting Exponents

To add or subtract terms with exponents, both the base and the exponent must be the same. If the base and the exponent are the same, add or subtract the coefficients and retain the base and exponent. For example, $3x^2 + 2x^2 = 5x^2$ and $3x^2 - 2x^2 = x^2$, but $x^2 + x^4$ and $x^4 - x^2$ cannot be combined.

###### Exponent to a Power

To raise a term with an exponent to another exponent, retain the base and multiply the exponents: $(x^2)^3 = x^{2 \times 3} = x^6$

###### Multiplying & Dividing Exponents

To multiply terms with the same base, multiply the coefficients and add the exponents. To divide terms with the same base, divide the coefficients and subtract the exponents. For example, $3x^2 \times 2x^2 = 6x^4$ and $${8x^5 \over 4x^2}$$ = $2x^{5-2} = 2x^3$.

###### Negative Exponent

A negative exponent indicates how many times one is divided by the base. To convert a negative exponent to a positive exponent, calculate the positive exponent and then take the reciprocal: $$b^{-e} = { 1 \over b^e }$$. For example, $$3^{-2} = {1 \over 3^2} = {1 \over 9}$$
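As a quick self-check of these rules, the short SymPy snippet below verifies each worked example symbolically. It is an added illustration and is not part of the ASVAB practice material.

```python
import sympy as sp

x = sp.symbols('x')

# Adding & subtracting like terms
assert sp.simplify(3*x**2 + 2*x**2 - 5*x**2) == 0
assert sp.simplify(3*x**2 - 2*x**2 - x**2) == 0
# Exponent to a power
assert sp.simplify((x**2)**3 - x**6) == 0
# Multiplying & dividing terms with the same base
assert sp.simplify(3*x**2 * 2*x**2 - 6*x**4) == 0
assert sp.simplify(8*x**5 / (4*x**2) - 2*x**3) == 0
# Negative exponent
assert sp.Rational(3)**-2 == sp.Rational(1, 9)
print("all exponent-rule examples check out")
```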
2023-03-26 05:59:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7371509671211243, "perplexity": 1140.0686052812193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00564.warc.gz"}
https://electronics.stackexchange.com/questions/358021/applying-negative-voltage-to-cmos-chips
# Applying negative voltage to CMOS chips I have a situation where it's possible a negative voltage may appear on the power supply rails driving CMOS chips. The negative voltage will be very limited in current, fed through a resistor. The datasheets of course specify that Vdd should not go below ground by more than 0.3V. Now of course, if you take the supply negative then the parasitic transistors and diodes begin to conduct, clamping the supply to 0.3-0.7V below ground anyway. My question is: how much negative current on the supply rails can a CMOS IC be expected to handle without failing or degrading? Would it be in the same sort of order as the clamping diode current for I/O pins (20mA)? If it can't handle any significant negative current at all, then I'll have to install a Schottky diode reverse across the power supply to clamp it below 0.3V. I have seen numerous designs where there are ordinary silicon diodes across the power supply to protect against reverse bias. This seems to be pointless, since the datasheet says not to exceed 0.3V, not 0.65V. Surely the parasitic structures will conduct before the external silicon diode. • You will have a short of the body diodes. To avoid any trace in the chip being damaged, you would have to limit the current to that of the weakest output. Maybe possible for CMOS chips with a few gates, impractical for any more highly integrated chip. – Janka Feb 23 '18 at 21:22 • The datasheet's Absolute Maximum Ratings should specify this damage limit, but maybe it's implied by a power limit rather than stated as a current. Can you edit to add a link to the datasheet? – MarkU Feb 23 '18 at 21:40 • @Foxie: Just to be clear: are you saying that the chip positive supply sometimes dips below the negative supply? – Transistor Feb 23 '18 at 23:30 • Yes, that's right. There's a split supply and some resistance between the positive rail and the negative rail. The negative rail could have power without the positive rail being powered, hence dragging the positive rail below ground weakly. – Foxie Feb 23 '18 at 23:42 • Regarding the datasheet, it doesn't say anything at all about taking the supply negative, only that the supply cannot go below -0.3V. It does specify the clamp current at 20mA maximum, but this is for I/O pins, not necessarily the supply. The datasheet for one of the ICs is here: ww1.microchip.com/downloads/en/DeviceDoc/40001844D.pdf – Foxie Feb 23 '18 at 23:44 In order for EOS protection diodes to be faster than the FETs they are protecting, they must be small, and their ESR, Imax and Pd are all related and trade off against speed. These are designed to protect against shoot-through (parasitic SCR) effects in CMOS. (Schematic created using CircuitLab.) Thus they use 2 diodes for each rail with 10k between them to make a better clamp and often specify not to exceed 5mA. If you cannot guarantee this then you must add more Schottky or TVS diode protection. Faster logic may have even lower DC current limits. TI says 2mA: www.ti.com/lit/an/slaa689/slaa689.pdf • "EOS pro..." => ESD protection diodes I think? – Bimpelrekkie Feb 23 '18 at 21:43 • rs-online.com/designspark/the-difference-between-eos-and-esd I've been using EOS for 40 yrs but often say ESD like everyone still says PCB – Sunnyskyguy EE75 Feb 23 '18 at 21:47 • Your bottom diodes are upside-down. 'Internal' can't go above $V_f$. – Transistor Feb 23 '18 at 21:49 • Of course, the touch pad editor is such a PITA – Sunnyskyguy EE75 Feb 23 '18 at 21:51 • Are these ESD diodes the only structures which will conduct in reverse?
I was under the impression there were many more structures, some of which will conduct with only one diode drop rather than two. If it's two diode drops then a single silicon diode across the supply will provide protection - but if it's one drop (hinted at by the datasheet's -0.3V minimum supply) a schottky will need to be used. – Foxie Feb 23 '18 at 23:45
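For completeness, the arithmetic behind "limit the clamp current to a few milliamps" can be sketched as below. The voltage and current values are made-up illustration numbers, not figures from the Microchip datasheet linked above.

```python
def min_series_resistance(v_negative, v_clamp=0.6, i_limit=2e-3):
    """Minimum series resistance that keeps the internal clamp current below i_limit
    when the positive rail is pulled below ground through that resistor.

    v_negative: magnitude of the negative excursion in volts
    v_clamp:    assumed forward drop of the internal clamp structure in volts
    i_limit:    maximum allowed clamp current in amperes
    """
    return (v_negative - v_clamp) / i_limit

# Example: a -5 V rail could weakly drag Vdd negative; keep the clamp current under 2 mA.
print("R >= %.0f ohms" % min_series_resistance(5.0))   # about 2200 ohms
```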
2019-07-18 01:14:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5006375908851624, "perplexity": 2638.7736467305035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.62/warc/CC-MAIN-20190718001934-20190718023934-00080.warc.gz"}
https://zavaleta.tv/cg8ret/9af595-bullet-kinetic-energy
## bullet kinetic energy

###### bullet kinetic energy

Use this online calculator to find the muzzle energy in foot-pounds and joules: a free online kinetic energy calculator with which you can calculate the energy of an object or body in motion given its mass and velocity. Muzzle energy is the kinetic energy of the bullet as it is expelled from the muzzle towards the target, and it is higher when the bullet is heavier and leaves the muzzle faster. Kinetic energy is the energy an object has owing to its motion; in classical mechanics it is equal to half of an object's mass multiplied by the velocity squared, $KE = \frac{1}{2}mv^2$. For a spinning projectile a rotational term is added, $KE = \frac{1}{2}Mv^2 + \frac{1}{2}I\omega^2$; this assumes a single rotational axis and forward motion of the bullet, and for multiple rotations and non-symmetric bodies things get a little more complicated. In some American firearms-related articles the kinetic energy is also defined as $E_k = m \cdot v^2 / 450240$, where $m$ is the weight of the bullet in grains, $v$ is its velocity in feet per second, and $E_k$ is the kinetic energy of the bullet in foot-pounds force; this is simply a reworking of the previous formula, with the unit conversions folded into the divisor 450240. Because kinetic energy depends on the square of the velocity (a marked difference from the momentum $$\normalsize{p=mv}$$, which depends linearly on velocity), much more of the gunpowder energy is transferred to the bullet than to the gun: when a bullet leaves a gun it has the same momentum as the gun (which recoils), due to conservation of momentum, but it has much more kinetic energy. For a 10 g bullet at 1000 m/s and a 10 kg gun recoiling at 1 m/s, the respective kinetic energies are K-bullet = 0.5 · 0.01 · 1000² = 5000 J and K-gun = 0.5 · 10 · 1² = 5 J. A kinetic energy penetrator (KEP, KE weapon, long-rod penetrator or LRP) is a type of ammunition designed to penetrate vehicle armour using a flechette-like, high-sectional-density projectile. Like a bullet, this type of ammunition does not contain explosive payloads and uses purely kinetic energy to penetrate the target. Video comparing the impact energy of several cartridges at 50 yards.

In a ballistic pendulum, the kinetic energy of the pendulum (after firing) is fully converted to potential energy, from which the pendulum's initial velocity can be calculated, and using the law of conservation of momentum the velocity of the bullet can then be computed. Part of the kinetic energy of the bullet changes to the kinetic energy of the block, which is moving after the hit; another part changes to the internal energy of the bullet and block, which warm up after the hit, $E_{k0}=E_k+\Delta U$. If all of the lost kinetic energy goes into heat energy added to the bullet, its final temperature can be found from its initial temperature (32 degrees C in one problem) and its specific heat. Considering the bullet and block as separate entities, the kinetic energy is $$\frac 12 MV^2 +\frac 12 m(v+V)^2$$. A spring behaves analogously: it will compress until the kinetic energy is transferred into the potential energy of the spring.

Example problems: A 70 kg man runs at a pace of 4 m/s and a 50 g meteor travels at 2 km/s; which has the most kinetic energy? A 30 gram bullet travels at 300 m/s; how much kinetic energy does it have? A 9.50 g bullet has a speed of 1.30 km/s; what is its kinetic energy in joules? A bullet of mass 15 g has a speed of 400 m/s; what is its kinetic energy? The momentum of a bullet of mass 20 g fired from a gun is 10 kg·m/s; its kinetic energy is ½ · 0.02 · 500² = 2.5 kJ. A 5000 kg truck has 400000 J of kinetic energy; how fast is it moving? A 3.90 g bullet moving at 290 m/s enters and stops in an initially stationary 3.50 kg wooden block on a horizontal frictionless surface. A bullet loses 19% of its kinetic energy when it passes through an obstacle; its speed decreases by 10%, since speed scales with the square root of kinetic energy. In another worked example, the kinetic energy of a block-bullet combination is ½ · 0.32 · 75² = 900 joules. If a bullet's energy is brought down to about 1 J (a velocity of roughly 20 m/s for a light bullet), it will still hurt when it impacts a body, but it definitely won't cause anything worse than a bruise.

With regard to ballistics, kinetic energy is represented in foot-pounds and describes the bullet's ability to do work at a certain velocity; it is calculated from the mass and velocity of the bullet. Have you ever wanted to compare a fast lightweight bullet to a slower heavy bullet to see which one hits harder? Just enter your bullet weight and its velocity. Muzzle velocity and energy are still used today as the most common guides to attempting to predict wounding performance, yet kinetic energy can be a misleading measure: ft-lbs-of-energy calculations are often applied in ways that are not accurately applicable, the idolatry of velocity alone greatly misleads, and "kinetic energy deposit" as a wounding mechanism has been clinically disproved. The amount of kinetic energy lost by a bullet depends on four main factors, the first of which is the amount of kinetic energy possessed by the bullet at the time of impact.
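The two formulas quoted above (the SI joule form and the grains/fps foot-pound form with the 450240 divisor) are easy to check against each other. Here is a small, illustrative Python sketch; it is not affiliated with any of the calculators mentioned, and the cartridge numbers are just example values.

```python
GRAIN_TO_KG = 6.479891e-5      # 1 grain in kilograms
FPS_TO_MPS = 0.3048            # 1 foot per second in metres per second
JOULE_TO_FTLB = 1 / 1.3558179  # 1 joule in foot-pounds force

def muzzle_energy(weight_grains, velocity_fps):
    """Return (joules, foot-pounds) for a bullet of given weight and muzzle velocity."""
    m = weight_grains * GRAIN_TO_KG
    v = velocity_fps * FPS_TO_MPS
    joules = 0.5 * m * v**2
    return joules, joules * JOULE_TO_FTLB

# Example: a 115 gr bullet at 1180 fps.
j, ftlb = muzzle_energy(115, 1180)
print(f"{j:.0f} J, {ftlb:.0f} ft-lbf")
print("grains/fps shorthand:", 115 * 1180**2 / 450240, "ft-lbf")   # m * v^2 / 450240
```

Both routes give essentially the same foot-pound figure, which is the point of the 450240 divisor: it is just the SI formula with the grain and fps conversions pre-multiplied.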
2021-06-24 11:46:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5878891348838806, "perplexity": 746.5086742286467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00296.warc.gz"}
http://mymathforum.com/linear-algebra/339756-characteristic-polynomial.html
My Math Forum Characteristic Polynomial

Linear Algebra Math Forum

March 28th, 2017, 04:52 PM #1 Member, Joined: Nov 2016, From: Kansas
Characteristic Polynomial
I have been given the characteristic polynomial p(x) = x(2-x)(1-x) of a matrix A. I have been given two diagonal entries of A, which are -3 and 5. I am now supposed to calculate the remaining diagonal entries. How do I do this?

March 28th, 2017, 06:32 PM #2 Member, Joined: Jan 2016, From: Athens, OH
For any n by n square matrix A, the trace of A is the negative of the coefficient of $x^{n-1}$ in the characteristic polynomial. I hope it is now clear how to solve your problem.
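Spelling the hint out: expanding the given polynomial exposes the trace, and with two diagonal entries known, the remaining one follows. Below is a quick SymPy check, added here as an illustration; it is not part of the forum thread.

```python
import sympy as sp

x = sp.symbols('x')
p = sp.expand(x * (2 - x) * (1 - x))   # x**3 - 3*x**2 + 2*x
trace = -p.coeff(x, 2)                 # trace of A = 3
remaining = trace - (-3) - 5           # the two known diagonal entries are -3 and 5
print(p, "| trace =", trace, "| remaining diagonal entry =", remaining)   # 1
```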
2018-01-23 21:43:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6563405394554138, "perplexity": 2057.968340743647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892699.72/warc/CC-MAIN-20180123211127-20180123231127-00306.warc.gz"}
https://nadre.ethernet.edu.et/record/20218/export/dcite4
Thesis Open Access Turbo Roundabout as an Alternative of conventional Roundabou : Case study at Africa union Intersection Mahlet Yegizaw DataCite XML Export <?xml version='1.0' encoding='utf-8'?> <creators> <creator> <creatorName>Mahlet Yegizaw</creatorName> <affiliation>Addis Ababa Science &amp; Technology University</affiliation> </creator> </creators> <titles> <title>Turbo Roundabout as an Alternative of conventional Roundabou : Case study at Africa union Intersection</title> </titles> <publisher>National Academic Digital Repository of Ethiopia</publisher> <publicationYear>2020</publicationYear> <subjects> </subjects> <contributors> <contributor contributorType="Supervisor"> <contributorName>Melaku Sisay ( PhD )</contributorName> <affiliation>Addis Ababa Science &amp; Technology University</affiliation> </contributor> </contributors> <dates> <date dateType="Issued">2020-07-30</date> </dates> <language>en</language> <resourceType resourceTypeGeneral="Text">Thesis</resourceType> <alternateIdentifiers> </alternateIdentifiers> <relatedIdentifiers> </relatedIdentifiers> <rightsList> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract">&lt;p&gt;Addis Ababa is experiencing rapid growth in the business and construction activities&lt;br&gt; which in turn increase the number of vehicles with an alarming rate. For this reason,&lt;br&gt; currently there is an attempt to develop infrastructures like road networks improvement and&lt;br&gt; evaluation and improving intersection control mechanism in the city. But still there is a&lt;br&gt; huge gap in developing road segment and intersection with good operational and safety&lt;br&gt; performance. The introduction of the turbo roundabout in Addis Ababa can have a&lt;br&gt; massive impact on the road safety and capacity problem in the city.&lt;br&gt; In this study the roundabout at Africa Union intersection was modeled using VISSM&lt;br&gt; software and proposed and designed Turbo roundabout using Torus software as an&lt;br&gt; alternative solution in terms of operational and safety performance.&lt;br&gt; Traffic volume is a major input, motorized Traffic data was collected in morning and&lt;br&gt; night time peak hour for 2 consecutive days from 8:00am to 9:00 am in the morning&lt;br&gt; and from 5:30pm to 6:30pm for night peak hour. The hourly traffic volume distribution&lt;br&gt; revealed that the peak hour traffic between 8:00am to 9:00 am in the morning.&lt;br&gt; Pedestrian volume count was used from secondary data of previous study on each&lt;br&gt; approach that cross the road on both directions in-bound and out-bound to the approach.&lt;br&gt; Then both existing and proposed intersection operational performance analysis is&lt;br&gt; conducted using VISSIM software.&lt;br&gt; The results showed 85sec delay and 80m queue lengths with F LOS. As an alternative&lt;br&gt; solution reconstruction of the intersections has been made into Rotor turbo roundabouts&lt;br&gt; and results revealed the intersection better performance with 32 sec delay, 34m queue&lt;br&gt; length with D LOS. 
Finally, the performances for two types of roundabouts are&lt;br&gt; determined with the using VISSIM software.&lt;br&gt; The analysis has proven that with the input parameters, rotor turbo roundabout offers&lt;br&gt; better performances and safety improvement compared to the existing intersections as&lt;br&gt; it can be taken as an alternative solution.&lt;/p&gt;</description> </descriptions> </resource>
2022-12-06 15:24:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18758288025856018, "perplexity": 9671.034648236884}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00381.warc.gz"}
https://trueq.quantumbenchmark.com/guides/streamlined_randomized_benchmarking.html
# Streamlined Randomized Benchmarking (SRB)¶ SRB on a single qubit can be regarded as a “Hello quantum world” program that establishes a baseline performance and verifies integration has been successful. The standard form of randomized benchmarking (RB), for which the groundwork ([11][12][13][14]) dates back more than a decade, is the standard tool used by experimentalists to estimate the fidelity of their gates [a]. Years of collaborations with experimental groups have taught our research scientists that the standard protocol could be significantly streamlined. Our SRB module includes major improvements over the standard protocol that significantly reduce the experimental cost of obtaining a precise estimate of the quality of a set of quantum operations. Saying more, running less Historically, in implementations of RB, there has been no standard way of choosing circuit lengths, the number of random circuits per circuit length, or the number of shots per circuit. However, these values can drastically affect experiment time. True-Qᵀᴹ Design minimizes experiment time using four techniques: 1. We reduce the number of fit parameters by introducing further randomization. This enables us to use a fit model $$A\cdot p^m$$ rather than $$A\cdot p^m +B$$, enabling substantially shorter circuit lengths since decorrelating $$p$$ and $$B$$ is no longer necessary [15]. 2. Because of the technique and model described in (1), we can use as few as two circuit lengths to fit for the parameter of interest, $$p$$. 3. We selectively choose sequence lengths which maximize expected information density per unit time. 4. We use fewer shots per circuit; as a rule-of-thumb, there is little advantage to using more than 50 shots per circuit in SRB experiments. Compared to many implementations found in literature today, this can easily lead to calibration times that are 7x faster. In the long run, as fidelities improve and classical hardware has lower circuit transfer overhead, these improvements may become even more apparent: a See, for instance, any of [16][17][18][19][20][21][22][23][24][25][26][27] ## Example 1¶ # # Streamlined randomized benchmarking (SRB) example. # Copyright 2019 Quantum Benchmark Inc. # import trueq as tq # Generate a circuit collection to run one-qubit SRB on qubit 0 with 30 random circuits # for each circuit length in [4, 32, 64]. circuits = tq.make_srb([0], [4, 32, 64], 30) # Initialize a simulator with stochastic pauli noise. # Run the circuits on the simulator to populate the results. sim.run(circuits) # Plot the results. circuits.plot.raw() # Print summary of the results. circuits.fit().summarize() SRB on [0] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 6.319 [5.500,7.138] e-03 Average gate infidelity of the error map A 0.954 [0.927,0.981] SPAM of the exponential decay A * p ** m p 0.987 [0.986,0.989] Decay rate of the exponential decay A * p ** m ## Example 2¶ # # Simultaneous streamlined randomized benchmarking (SRB) example. # Copyright 2019 Quantum Benchmark Inc. # import trueq as tq # Generate a circuit collection to run simultaneous SRB on qubits [0, 1, 2] with # 30 random circuits for each circuit length in [4, 32, 64]. circuits = tq.make_srb([0, 1, 2], [4, 32, 64], 30) # Initialize a simulator with stochastic pauli noise. # Run the circuits on the simulator to populate the results. sim.run(circuits) # Plot the results. circuits.plot.raw() # Print summary of the results. 
circuits.fit().summarize() SRB on [0] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 6.372 [5.556,7.187] e-03 Average gate infidelity of the error map A 0.976 [0.954,0.998] SPAM of the exponential decay A * p ** m p 0.987 [0.986,0.989] Decay rate of the exponential decay A * p ** m SRB on [1] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 5.817 [5.054,6.580] e-03 Average gate infidelity of the error map A 0.971 [0.947,0.995] SPAM of the exponential decay A * p ** m p 0.988 [0.987,0.990] Decay rate of the exponential decay A * p ** m SRB on [2] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 6.619 [5.935,7.303] e-03 Average gate infidelity of the error map A 0.973 [0.947,0.998] SPAM of the exponential decay A * p ** m p 0.987 [0.985,0.988] Decay rate of the exponential decay A * p ** m ## Example 3¶ # # Simultaneous streamlined randomized benchmarking (SRB) example. # Copyright 2019 Quantum Benchmark Inc. # import trueq as tq # Generate a circuit collection to run simultaneous SRB on individual qubits (0, 3) and # a qubit pair (1, 2) with 30 random circuits for each circuit length in [4, 32, 64]. circuits = tq.make_srb([0, [1, 2], 3], [4, 32, 64], 30) # Initialize a simulator with stochastic pauli noise. # Run the circuits on the simulator to populate the results. sim.run(circuits) # Plot the results. circuits.plot.raw() # Print summary of the results. circuits.fit().summarize() SRB on [0] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 6.987 [6.108,7.865] e-03 Average gate infidelity of the error map A 0.975 [0.952,0.997] SPAM of the exponential decay A * p ** m p 0.986 [0.984,0.988] Decay rate of the exponential decay A * p ** m SRB on [1, 2] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 1.518 [1.365,1.671] e-02 Average gate infidelity of the error map A 0.966 [0.939,0.994] SPAM of the exponential decay A * p ** m p 0.980 [0.978,0.982] Decay rate of the exponential decay A * p ** m SRB on [3] -------------------------------------------------------------------------------- Name Estimate 95% CI Description r 6.928 [6.188,7.667] e-03 Average gate infidelity of the error map A 0.978 [0.955,1.001] SPAM of the exponential decay A * p ** m p 0.986 [0.985,0.988] Decay rate of the exponential decay A * p ** m
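The summaries above report a decay parameter p and an infidelity r for each system. For a single exponential decay of the form A·p^m, both can be reproduced outside of True-Q with a few lines of NumPy/SciPy, and the usual randomized-benchmarking conversion r = (1 - p)(d - 1)/d with d = 2 for one qubit appears consistent with the numbers printed above (p ≈ 0.987 corresponds to r ≈ 6.5e-03). The survival probabilities below are made up for illustration; this is not True-Q code.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p):
    """SRB fit model: survival probability A * p**m at sequence length m."""
    return A * p**m

# Made-up average survival probabilities at the three sequence lengths used above.
m = np.array([4, 32, 64])
y = np.array([0.93, 0.64, 0.43])

(A, p), _ = curve_fit(decay, m, y, p0=(0.95, 0.99), bounds=([0.0, 0.0], [1.5, 1.0]))

d = 2                       # Hilbert-space dimension of a single qubit
r = (1 - p) * (d - 1) / d   # average gate infidelity implied by the decay rate
print(f"A = {A:.3f}, p = {p:.4f}, r = {r:.2e}")
```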
2019-12-12 05:29:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6617840528488159, "perplexity": 5232.915533227874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540537212.96/warc/CC-MAIN-20191212051311-20191212075311-00256.warc.gz"}
https://math.stackexchange.com/questions/2252586/do-these-numbers-contain-all-possible-finite-sequences-of-decimal-digits
# Do these numbers contain all possible finite sequences of decimal digits? Let's define the following function: $F(n)$ is equal to the minimal natural number (integer) $x$ such that ${10^x}$ is greater than $n$ (assuming that $n$ is a positive natural number). Then let's define the next function (that takes two arguments): $G(x,y)$ evaluates to the $(F(x) - y + 1)$-th decimal digit of $x$ if $(F(x) - y + 1) \ge 1$ ; otherwise, it evaluates to $0$ (also assuming that $x$ and $y$ are positive integers). Consider a set of numbers {$N_1$, $N_2$, $N_3$, ...} such that each $i$-th decimal digit of the decimal expansion of $N_k$ is equal to $G({p_i},k + 1)$, assuming that each $N_k$ starts with a zero followed by the floating point, and where $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, ... (prime numbers). To put it simply, in the decimal expansion of $N_1$ each $i$-th decimal digit is equal to the $2$nd-to-last decimal digit of $p_i$; in the decimal expansion of $N_2$, each $i$-th decimal digit is equal to the $3$rd-to-last decimal digit of $p_i$; in the decimal expansion of $N_3$, each $i$-th decimal digit is equal to the $4$th-to-last decimal digit of $p_i$ etc., that is, in the decimal expansion of $N_k$, each $i$-th decimal digit is equal to the $(k+1)$th-to-last decimal digit of $p_i$. But if we want to extract the $A$-th-to-last digit of number $B$ and $A$ is greater than the total number of digits in the decimal representation of $B$, we assume that the $A$-th-to-last digit of $B$ is $0$ (for example, the $123$-th-to-last digit of $987654$ is $0$, as well as its $12$-th-to-last digit). For example, if we compute the first $1000000$ digits of $N_1$, that is, the sequence that corresponds to the set {$2$, $3$, $5$, $7$, $11$, $13$, $17$, ..., $15485849$, $15485857$, $15485863$}, we will get the sequence that starts with 0.00001111223344455667778890000123... and ends with ...93344568012556781346780034456 The main question: can we assume that each such number contains all possible finite sequences of decimal digits? If yes (or no), how to prove this? If yes, are these numbers normal (and is it possible to prove this)? Even for three-digit sequences in $N_1$, there are a lot of examples that cannot be found in the first $1000000$ digits of this number, e.g. $208$, $209$, $210$, $198$, $298$, $598$ etc. The additional question: how to prove that any $N_k$ is irrational and transcendental? • What is the point of the useless constraint $F(x)\ge y$? Do you mean the $(F(x)-y)$th decimal digit? Why don't you just use straightforward language like "the second-to-last digit of $x$" instead of defining it in this obtuse way? Why do you call it obvious that these numbers are not normal? – Erick Wong Apr 26 '17 at 8:11 • I can't see why $N_1$ starts with 0.00001111... . Maybe you miswrote $10x$ to $10^x$? – didgogns Apr 26 '17 at 12:56 • @ErickWong: I tried to provide a more formal definition. I edited the question and added a simplified definition. – lyrically wicked Apr 27 '17 at 5:13 • @didgogns: the second-to-last digit of the elements of {2,3,5,7} is 0, hence 4 zeros at the beginning of $N_1$; the second-to-last digit of the elements of {11,13,17,19} is 1, hence 4 ones after 4 zeros; ...[continue]...; the second-to-last digits of the elements of {15485849, 15485857, 15485863} are 4, 5 and 6, so the first 1000000 decimal digits of $N_1$ end with "456". – lyrically wicked Apr 27 '17 at 5:21 • @lyricallywicked Thank you for the edit. Informal definitions are often much more useful. 
If you have an overly technical definition and don't give any motivation, then not only is it harder for readers to follow, but if you make even a slight mistake or typo in your definition then the result will mean something completely different from what you intend. Case in point: you wrote $F(x)-2$ but you clearly meant something else: no one could be sure of this if all you gave was a wordless definition. – Erick Wong Apr 27 '17 at 5:47
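A short sketch of the construction described in the question above (illustrative code added here, not from the thread; it assumes SymPy's `prime` helper is available). The i-th digit of $N_k$ is the $(k+1)$-th-to-last decimal digit of the i-th prime, taken as $0$ when the prime is too short; for $k=1$ this reproduces the opening digits quoted in the question.

```python
# Hypothetical helper, not part of the original post.
from sympy import prime  # assumes sympy is installed; prime(i) returns the i-th prime

def digits_of_N(k, count):
    """Return the first `count` decimal digits of N_k as a string."""
    out = []
    for i in range(1, count + 1):
        p = str(prime(i))               # i-th prime as a decimal string
        pos = len(p) - (k + 1)          # index of the (k+1)-th-to-last digit
        out.append(p[pos] if pos >= 0 else "0")
    return "".join(out)

# The question reports N_1 starting 0.00001111223344455667778890000123...
print("0." + digits_of_N(1, 40))
```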
2019-06-17 23:15:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96270751953125, "perplexity": 223.105256028946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998581.65/warc/CC-MAIN-20190617223249-20190618005249-00105.warc.gz"}
https://www.hackmath.net/en/math-problem/2264
# LCM A common multiple of three numbers is 3276. The first number goes into it 63 times, the second 7 times, and the third 9 times. What are the numbers? Result a = 52 b = 468 c = 364 #### Solution: $a=3276/63=52$ $b=3276/7=468$ $c=3276/9=364$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephrasings of the example. Thank you! Leave us a comment on this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! Tips for related online calculators Do you want to calculate the least common multiple of two or more numbers? ## Next similar math problems: 1. Buses At 10 o'clock, buses No. 2 and No. 9 meet at the bus stop. Bus number 2 runs at an interval of 4 minutes and bus number 9 at intervals of 9 minutes. How many times do the buses meet by 18:00 local time? 2. LCM of two numbers Find the smallest common multiple of 63 and 147. 3. The smallest number What is the smallest number that can be divided by both 5 and 7? 4. Lcm simple Find the least common multiple of these two numbers: 140 and 175. 5. Apples 2 What is the minimum number of apples in the cart, if it can be completely divided into packages of 6, 14 and 21 apples? 6. Biketrial Kamil rode biketrial. Before a hill he set the front gear with 42 teeth and the back with 35 teeth. After how many rotations of the front wheel do both wheels reach the same position? 7. Dance ensemble The dance ensemble took the stage in pairs. During dancing, the dancers gradually formed groups of four, six and nine. How many dancers does the ensemble have? 8. Balls groups Karel pulled the balls out of his pocket and divided them into groups. He could divide them into groups of four, six or seven, with no ball ever left over. What is the smallest number of balls he could have? 9. Cherries Cherries in the bowl can be divided equally among 19 or 13 or 28 children. What is the minimum number of cherries in the bowl? 10. Dining tables In the dining room there are tables with 4 chairs, 6 chairs and 8 chairs. How many diners must there be at least so that all tables (chairs) are occupied, given that there are more than 50 diners? 11. Cages Honza had three cages (black, silver, gold) and three animals (guinea pig, rat and puppy). There was one animal in each cage. The golden cage stood to the left of the black cage. The silver cage stood to the right of the guinea pig's cage. The rat was in the
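A quick check of the solution above (an illustrative snippet added here, not part of the original page): divide 3276 by 63, 7 and 9, and confirm the least common multiple of the three results comes back to 3276.

```python
from math import lcm  # math.lcm accepts several arguments in Python 3.9+

a, b, c = 3276 // 63, 3276 // 7, 3276 // 9
print(a, b, c)        # 52 468 364
print(lcm(a, b, c))   # 3276
```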
2020-04-02 21:54:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2769428789615631, "perplexity": 2098.730220207502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370508367.57/warc/CC-MAIN-20200402204908-20200402234908-00125.warc.gz"}
https://forum.snap.berkeley.edu/t/inexact-square-roots/12524
# Inexact square roots It should be sqrt(10) instead of this, otherwise this isn't exact (I understand you should not express reals, but why not express this as ) offtopic: replit ghost writer is good at writing things Summary (this is an example of it not being that intelligent ) Summary it's called ghost writer Exact rationals work well because the answer to a problem is always a single number, even if the numerator and denominator have lots of digits. But the result of a ringed √⎺⎺⎺⎺ operator can't be added to, say, 𝜋, it'd have to be (+ sqrt(10) pi) which can't be reduced at all. Then you do more arithmetic and the result of the computation still can't be reduced at all. That's not what people want when they do floating point arithmetic. They want a pretty good approximation to the answer, quickly. Remember that hardly any real numbers can be represented exactly, regardless of what notation you use, because there are uncountably many real numbers, and only countably many computer programs, since a program is a finite stream from a finite alphabet, and there are $$\aleph_0$$ of those. and get a block called "quickly smash expression into value"
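A small illustration of the trade-off described above (in Python rather than Snap!, and not from the thread): exact rational arithmetic stays exact, while any float standing in for √10 is only an approximation.

```python
from fractions import Fraction
import math

x = Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic: 1/2
print(x)                              # Fraction(1, 2)

r = math.sqrt(10)                     # binary floating-point approximation
print(r * r)                          # close to, but not exactly, 10
```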
2022-12-08 02:01:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645981907844543, "perplexity": 841.3383582784151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00687.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2019_AMC_10A_Problems/Problem_12&diff=next&oldid=101502
# Difference between revisions of "2019 AMC 10A Problems/Problem 12" The following problem is from both the 2019 AMC 10A #12 and 2019 AMC 12A #7, so both problems redirect to this page. ## Problem Melanie computes the mean $\mu$, the median $M$, and the modes of the $365$ values that are the dates in the months of $2019$. Thus her data consist of $12$ $1\text{s}$, $12$ $2\text{s}$, . . . , $12$ $28\text{s}$, $11$ $29\text{s}$, $11$ $30\text{s}$, and $7$ $31\text{s}$. Let $d$ be the median of the modes. Which of the following statements is true? $\textbf{(A) } \mu < d < M \qquad\textbf{(B) } M < d < \mu \qquad\textbf{(C) } d = M =\mu \qquad\textbf{(D) } d < M < \mu \qquad\textbf{(E) } d < \mu < M$
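A direct computation of the three statistics (a sketch added here for illustration; it is not part of the wiki page) gives $d < \mu < M$, i.e. choice (E).

```python
from statistics import mean, median

values = []
for day in range(1, 29):        # days 1..28 occur in all 12 months of 2019
    values += [day] * 12
values += [29] * 11 + [30] * 11 + [31] * 7   # 2019 is not a leap year

mu = mean(values)                                            # about 15.72
M = median(values)                                           # 16
modes = [v for v in range(1, 32) if values.count(v) == 12]   # the most frequent dates: 1..28
d = median(modes)                                            # 14.5
print(mu, M, d)                                              # d < mu < M
```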
2020-09-26 18:45:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4005162715911865, "perplexity": 2828.9775934345666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244353.70/warc/CC-MAIN-20200926165308-20200926195308-00721.warc.gz"}
https://proxies-free.com/tag/analysis/
## calculus – What are the steps in breaking down the exponent in this limit analysis? I'm trying to understand the reasoning in the following step of a limit analysis: $$\lim_{n \to \infty} n\left(\left(1- \frac{1+c}{\frac{n}{\ln(n)}} \right)^{\frac{n}{\ln (n)}}\right)^{(n-1)\ln n/n} = \lim_{n \to \infty} ne^{-((1+c)\ln(n))}$$ I understand the "inner" part; $$\lim_{n \to \infty} \left(1-\frac{1+c}{\frac{n}{\ln (n)}}\right)^{\frac{n}{\ln n}} = e^{-(1+c)}.$$ And I sort of see that the outer exponent $$((n-1)\ln n)/n = (\ln n) - (\ln n / n)$$ and the second part goes to 0, but it's not clear to me what rules actually justify "bringing the limit to the exponent". What are the actual steps involved in deducing this limit? More generally, these types of asymptotic analysis show up in comp sci all the time and I feel there is a bag of tricks that I am missing. (A worked sketch of this step is given after this group of questions, below.)

## fa.functional analysis – Decomposition of a function into right-sided and left-sided functions Here I define a distribution $$f\in D'$$ to be right-sided if supp $$f\subseteq (0,\infty)$$ and denote it by $$f_+$$, and if supp $$f\subseteq (-\infty,0)$$ it is called left-sided and denoted by $$f_-$$. Now, it is claimed that if $$f$$ is a locally integrable function on $$\mathbb{R}$$, then there is a unique decomposition $$f=f_++f_-$$ where $$f_+$$ is a right-sided locally integrable function and $$f_-$$ is a left-sided locally integrable function. For example: if I have $$A(\omega)=\frac{1}{\omega^2+9}$$ then I can find a decomposition $$A_+(\omega)=\frac{i}{6(\omega+3i)}$$ and $$A_-(\omega)=\frac{-i}{6(\omega-3i)}$$ by inspection. But how do I find such a decomposition for a function like $$e^{-a x}\theta(-x)$$ where $$\theta$$ is the Heaviside step function? Is there a general process to find such a decomposition?

## Complex Analysis: Decomposing a function into right-sided and left-sided functions Here I define a distribution $$f\in D'$$ to be right-sided if supp $$f\subseteq (0,\infty)$$ and denote it by $$f_+$$, and if supp $$f\subseteq (-\infty,0)$$ it is called left-sided and denoted by $$f_-$$. Now, it is claimed that if $$f$$ is a locally integrable function on $$\mathbb{R}$$, then there is a unique decomposition $$f=f_++f_-$$ where $$f_+$$ is a right-sided locally integrable function and $$f_-$$ is a left-sided locally integrable function. For example: if I have $$A(\omega)=\frac{1}{\omega^2+9}$$ then I can find a decomposition $$A_+(\omega)=\frac{i}{6(\omega+3i)}$$ and $$A_-(\omega)=\frac{-i}{6(\omega-3i)}$$ by inspection. But how do I find such a decomposition for a function like $$e^{-a x}\theta(-x)$$ where $$\theta$$ is the Heaviside step function? Is there a general process to find such a decomposition?

## functional analysis – construction of a function and Lebesgue measure I've spent weeks with an interesting problem in my head; the problem would say something like: build, if possible, a continuous and unbounded function f ∈ L1((0, ∞)). In case the above is possible, say whether the construction can be made so that the exact value of $$\int_0^{\infty} f\,dx$$ can be computed. Does someone dare to solve it?

## fa.functional analysis – Bochner integral in a Fréchet space I have a Fréchet space $$V$$ whose topology is (if it helps) induced by a family $$\mathcal{P}$$ of norms – not just seminorms – and on this space I have a Borel probability measure $$\nu$$. Now, I would like to see whether it is possible to make sense of the integral $$\begin{equation} \int_V x \, \mathrm{d} \nu \left( x \right) \end{equation}$$ within $$V$$.
The measure is such that I actually can prove that the integral exists as a Bochner integral in some Banach space completions of $$V$$ with respect to some of the norms in $$\mathcal{P}$$. But I am far away from being able to prove this for all of these norms. Is there perhaps a different way to define Bochner integrals in Fréchet spaces? I only know the usual Banach space setting and it seems to not be enough in this case.

## analysis – The Denseness of Q Consider $$a, b \in \mathbb{R}$$ where $$a < b$$. Use the denseness of $$\mathbb Q$$ to show there are infinitely many rationals between $$a$$ and $$b$$. I have chosen to answer this using induction. I know $$P_1$$ is a true assertion since by the denseness of $$\mathbb Q$$ there exists a rational $$r_1$$ such that $$a < r_1 < b$$. I can then assume $$P_n$$ is true and that there are $$n$$ distinct rationals between $$a$$ and $$b$$ of the form $$a < r_1 < r_2 < \cdots < r_n < b$$. This is where I'm stuck, but I know I want to use the denseness of $$\mathbb Q$$ again to say that since $$a < r_n < b$$, I can find a rational $$r_{n+1}$$. At the same time, I don't know what it is about $$\mathbb Q$$ that allows me to say it.

## I will make a technical SEO audit report and competitor analysis for \$10 #### I will make a technical SEO audit report and competitor analysis I will audit your website manually and search for any possible issues that prevent your site from ranking in search engines. Also, I will provide the recommendations that you need to make to improve your site's rankings. This technical SEO audit report includes opportunity analysis. You will get new growth opportunities and a low chance of getting a decline in your website and business's sales, traffic and ranking. What I Offer: • Competitive Analysis Recommendations • Recommendations / Suggestions to Improve your Website rankings • Report with your logo and themed colors • Only Quality Work • Client Satisfaction 100% • Proper SEO Audit Report and Action Plan If you have any questions about the information given above, feel free to contact me.

## functional analysis – How is this property equivalent to the Reiter Property? We have the Reiter Property $$(R_2)$$ for an action of a group G on a set X: for any $$\epsilon>0$$ and any finite subset $$S$$ of G, there exists $$\phi\in{\ell^2(X)}$$ such that $$\|s\phi-\phi\|_{\ell^2} < \epsilon\|\phi\|_{\ell^2}$$ for all $$s\in{S}$$. I am trying to show this is equivalent to the alternative property $$(R_2)'$$: for any $$\epsilon>0$$ and any finite subset $$S$$ of G, there exists $$\phi\in{\ell^2(X)}$$ such that $$\left\|\frac{1}{|S|}\sum_{s\in{S}}{s\phi}\right\|_{\ell^2}>(1-\epsilon)\|\phi\|_{\ell^2}$$, but I am completely stuck. I have tried using some uniform convexity since $$\ell^2$$ has an inner product, but can only get anything out of it when $$|S|=2$$. I have heard from someone else that this is related to adjoint operators, so I have tried defining $$T:\ell^2(X)\rightarrow\ell^2(X)$$ by $$T(\phi)=\frac{1}{|S|}\sum_{s\in{S}}{s\phi}$$ and can deduce that it has norm 1 and, if we extend S to also contain the inverses of all its elements, is self-adjoint, but I can't see how this could be helpful to solve the problem. Many thanks.

## runtime analysis – How to analyse the worst-case time complexity of this algorithm (a mix of Bubble Sort and Merge Sort)? Suppose I have a sorting algorithm that sorts a list of integers.
When the input size (the number of elements) $$n$$ is odd, it sorts using Bubble Sort, and for even $$n$$ it uses Merge Sort. How do we perform the worst-case time complexity analysis for this algorithm? The context in which this question came about is when I was going through the analysis of the MAX-HEAPIFY algorithm given in CLRS (3rd edition) on page 154. In the worst-case analysis, the author had assumed some arbitrary input size $$n$$ and then concluded that the worst case occurs when the bottom-most level of the heap is exactly half full. This threw me off, since in various texts and articles $$n$$ is assumed to be fixed when performing the worst-case analysis (and even for best or average cases, for that matter), and the number of elements at the bottom-most level of a heap of $$n$$ nodes is then fixed. In that light, I concocted this algorithm so as to have the worst case depend on $$n$$. My intuition tells me that the worst-case time complexity for this algorithm is $$\mathcal O(n^2)$$ since that's the worst-case runtime for Bubble Sort. But I want to know the precise mathematical formulation of the worst-case time complexity analysis for any algorithm. Any insight would be much appreciated.

## I will do 50 longtail SEO keyword research and competitor analysis for your website rank in google for \$100 #### I will do 50 longtail SEO keyword research and competitor analysis for your website rank in google Are you in search of the best SEO keyword research & competitor analysis service? Keywords are the backbone of your digital existence. How can you grow your business or website without a solid foundation? Proper SEO keyword research is necessary to survive and distinguish your digital existence. Are you looking for SEO keyword research? You are at the right place! Keyword research is the 1st and most important step for SEO. If you don't invest in proper research, all the work afterwards will be useless and a big waste of time and money. My SEO Keyword Research Gig Includes: • List of Manually Selected Kw's • Monthly Search Volume • Kw's Competition • Cost Per Click (CPC) • Keyword Intent • Click Through Rate (CTR) • Detailed Excel Report • Country-Specific Search Volume Competitor Analysis: • Competitor's Top Pages • Competitor's Organic Kw's
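For the first (calculus) question in this group, here is one worked sketch (added for illustration, not from the scraped pages) of what justifies "bringing the limit to the exponent". Write $A_n=\left(1-\frac{(1+c)\ln n}{n}\right)^{n/\ln n}$, use $x^y=e^{y\ln x}$, and expand $\ln(1-x)=-x+O(x^2)$ with $x=\frac{(1+c)\ln n}{n}\to 0$:

$$\ln A_n = \frac{n}{\ln n}\,\ln\!\left(1-\frac{(1+c)\ln n}{n}\right) = \frac{n}{\ln n}\left(-\frac{(1+c)\ln n}{n}+O\!\left(\frac{(\ln n)^2}{n^2}\right)\right) = -(1+c)+O\!\left(\frac{\ln n}{n}\right),$$

$$\frac{(n-1)\ln n}{n}\,\ln A_n = \left(\ln n-\frac{\ln n}{n}\right)\left(-(1+c)+O\!\left(\frac{\ln n}{n}\right)\right) = -(1+c)\ln n+O\!\left(\frac{(\ln n)^2}{n}\right).$$

Since the error term tends to $0$ and $e^{(\cdot)}$ is continuous, $n\,A_n^{(n-1)\ln n/n}=n\,e^{-(1+c)\ln n+o(1)}=n^{-c}\,e^{o(1)}$, so both sides of the quoted step have the same limit, namely $\lim_{n\to\infty}n^{-c}$.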
2021-01-15 21:30:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 80, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5211783051490784, "perplexity": 662.00641865532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703496947.2/warc/CC-MAIN-20210115194851-20210115224851-00639.warc.gz"}
http://qpr.ca/blogs/topics/technical-issues/technology/sustainability/
## sustainability ### Energy, the Environment, and What We Can Do Monday, April 9th, 2012 John Baez gave a Google Tech Talk on the issue. The slides include links to more detailed arguments and his home page also links to the Azimuth Project wiki is collecting information and ideas from a larger group of participants. ### Sustainable Energy Choices Wednesday, June 29th, 2011 Barry Brooks at 'BraveNewClimate' has made a brave effort at summing up the need for nuclear power as part of the CO2-free mix in a brief video, but parts of it still felt to me like “industry propaganda” – to the extent that I might be a bit embarrassed if anyone seeing my earlier references to the BNC site should subsequently come across it. My first concern is that very little argument is given to support the claim that non-nuclear options won't suffice. No-one is likely to be convinced that just because Denmark has not yet displaced anything close to the major part of their coal use with wind that they may not eventually do so (though I suspect that in fact they won't), and the use of that as an apparent argument will just make the case seem weak and forced. Another point that troubles me is at the conclusion where the video compares the golf ball sized lump of nuclear fuel that is capable of providing enough energy to meet the needs of a typical western human lifetime with the many tons of coal that it would "displace". I suspect that this will seem obviously “unfair” even to those who cannot say why (The only comparison that really matters is with volume of ore rather than volume of fuel). Of course is hard to tell the full story so briefly, but if it can’t be done well enough then it were better not done at all. The BNC site has a lot of credibility but the video actually undermines it so I actually hope  it doesn’t "go viral". ### The Inheritors of What? Thursday, November 18th, 2010 A new book by Eric Kaufmann entitled Shall the Religious Inherit the Earth?: Demography and Politics in the Twenty-First Century is reviewed by Phillip Longman in 'Big Questions Online'. An open question, I guess, is whether or not there is an inheritable tendency towards religiosity and, if so, how it is related to fertility within a particular society. But a bigger related question may be:  if the over-breeders can't be stopped, then what kind of earth will there be for them to inherit? ### Artificial Leaves from North Carolina Sunday, October 3rd, 2010 Thanks to reader Colleen McGuire for pointing out  this interesting development. It does look promising if it can be developed further, although as one of the researchers said, "We do not want to overpromise at this stage, as the devices are still of relatively low efficiency and there is a long way to go before this can become a practical technology." ...more » Sunday, March 22nd, 2009 Many of these Top 10 Myths about Sustainability are mythical in the sense that they are just elementary misconceptions that don't qualify as myths because they are not widely held by intelligent adults, but "Myth 6: Sustainability means lowering our standard of living" is an exception because it is, I think, widely believed by intelligent adults. 
...more » ### Yes, There *IS* an Elephant in the Room Saturday, March 14th, 2009 ### David MacKay: Sustainable Energy - Without the Hot Air Thursday, January 15th, 2009 In Sustainable Energy - without the hot air UK physicist David MacKay presents plausible back-of-the-envelope estimates of the scales of action needed under various strategies for reduction of global carbon fuel combustion. The numbers he uses are easily checked and his analysis can be re-run with revised parameters if needed. Only when a significant fraction of humanity is capable of actually doing both those things will we have any chance of making the right decisions. ### Mythical Myths No 7 Thursday, September 18th, 2008 In "The Myth of the Tragedy of the Commons", Ian Angus claims that the phenomenon commonly called a "tragedy of the commons" is a myth. But he is wrong. Anyone who is aware of the fate of the Atlantic cod fishery must know the tragedy of an unregulated commons, so the phenomenon is surely real. It is real, and Angus has been blinded by his anger at those who have (ab)used the phenomenon into denying the phenomenon itself rather than the arguments by which it has been (falsely) claimed to justify privatization of public assets. ...more » ### Don’t Drink the Nuclear Kool-Aid | AlterNet Wednesday, July 23rd, 2008 Don’t Drink the Nuclear Kool-Aid | AlterNet ### CO2 Reduction Scenarios (UK example) Wednesday, June 25th, 2008 ### Fare-Free Public Transit Thursday, July 26th, 2007 AlterNet: Environment: Fare-Free Public Transit Could Be Headed to a City Near You (and IMO it could and should be paid for with the revenue from a levy on urban auto traffic like the 'congestion fee' charged in London) ### AlterNet: Environment: What to Say to Those Who Think Nuclear Power Will Save Us Wednesday, July 25th, 2007 AlterNet: has reprinted an article from 'Orion' magazine by someone called Rebecca Solnit who claims to be giving advice on: What to Say to Those Who Think Nuclear Power Will Save Us though what she is really arguing is not just that nuclear power is not a panacaea, but that it must be excluded at all costs - and the tone of her argument (as with many on the climate bandwagon) makes it plain that she is more interested in using the threat of global warming to justify the imposition of behavioural constraints than she is on actually doing everything possible to reduce our emissions of CO2. Unfortunately she has nothing new to add to the debate, but some of the exchanges in the comments are more interesting. Much of the discussion on both sides is sufficiently vacuous and polemical as to strain one's faith in democracy, but at least some of it is decent and it is up to the reader to assess which of the commenters appear to have the more credible arguments. Personally, I come down pretty firmly on the side of those who see a substantial increase in nuclear power generation as an essential component of any strategy for the mitigation of our environmental impact. But even with it, and with everything else we can possibly do, we're headed to where snowballs have no chance unless the other big unmentionable, population control, is also pushed hard and fast. ### Video Debate on Nuclear Power Thursday, July 5th, 2007 #### Peter Bradford, Patrick Moore and Jim Riccio debate the future of nuclear power and why nuclear power cannot solve the climate crisis. 
The actual debate linked to from Nuclear Information and Resource Service - NIRS is quite interesting, but the intro (including the above description) and the powerpoint style summary notes alongside the video are a disgrace. They accurately summarise the points made by Riccio and Bradford but distort or contradict those made by Moore. In fact Bradford was the most credible presenter followed pretty closely by Moore with Riccio being just totally unimpressive. I think I'll cancel my Greenpeace support. ### Kudos to Fox News ? Thursday, March 15th, 2007 Thanks to Theodore Labadie who posted the link on the 'Transforming Langara' listserv, but this is not surprising. The interview subject is promoting the purchase of "carbon offsets" and the opportunities for fraud in that are so magnificent that no self-respecting greedy mogul could possibly hold back for long. ### BC-Alberta Trade Agreement Scuppers CO2 Reduction in BC Wednesday, February 21st, 2007 This Tyee Article about TILMA shows how its restrictions on differential regulation make it virtually impossible for one province to implement stricter standards (in any area). ### AlterNet: EnviroHealth: Renewables Can Turn the Tide on Global Warming Monday, February 12th, 2007 ### Why I Have Not Read 'Heat' Friday, February 2nd, 2007 According to his publisher's web blurb, George Monbiot has established that "we need a 90% cut in our emissions within 25 years if we are to stop ourselves reaching the point where the "climate feedback" becomes unstoppable", and "for the first time, ... explains how this cut could be achieved. " My problem is more with the latter than the former (although I am doubtful that the 90% cut can reliably to be shown to be either necessary or sufficient), and arises largely because the blurb reads to me like one for a secret 'get rich quick' scheme whose author will tell me how to make millions doing absolutely nothing in five easy steps (which just happen to be available only in book form). If Monbiot can't tell me in one page what the essence of his strategy is then he probably can't do it in a book either. But just in case I'm missing out on hearing about the magic bullet that noone else has thought of I'll be going to today's seminar. And I urge anyone else at Langara who cares about the very real threat of global warming to do likewise. ### Lights Out! Thursday, February 1st, 2007 Unfortunately I don't think this initiative has had enough publicity in this part of the world for there to be a noticeable dip as seen by BC Hydro but there's nothing wrong with joining in to the extent that we are able. It's a shame that we are not using air conditioning at this time of year, but perhaps there are some other College-wide systems (other than computers) which could be temporarily shut down. And perhaps with regard to computers we could ask ICS to use those 5 min to test our emergency power supply system. Of course if we wanted to send the strongest possible correct "signal" then we could start up the air conditioning and every available electric heater etc (arc welders would be good if we have them) earlier that morning then turn them all off at 10:55 and on again at 11:00 for another half hour or so (and off again at random times after that). But I suppose that might be seen as a cynical and dishonest manipulation of the data. 
### Selling Indulgences Monday, January 29th, 2007 A couple of colleagues have circulated links to websites offering the opportunity to offset the CO2 created by my energy consumption in return for monetary payment. eg Thanks for the links, but all of these people are asking for money and offering little but vague assertions in return. This is not intended to deny the good intentions of either the Native Power people linked to by Al Gore or of the IRES folks and their friends at WestJet, or I guess of any other chap who puts up a website and offers a $60 absolution for the CO2 I spewed on my flight to India last year. But none of these sites offer convincing proof that my$60 payment will somehow suck back all that CO2. So how can I tell that what salves my conscience will indeed undo the effects of my sin? This business of buying remission reminded me of the mediaeval practice of selling indulgences and a quick Google search confirmed that I was not the first to make that connection: Monbiot.com � Selling Indulgences ### How Much Renewable Energy Do We Have? Monday, January 29th, 2007 George Monbiot showed (in November 2005) that, even with extremely generous assumptions about the plausible extent of resource usage, renewable energy sources will not suffice to replace what he believes must be cut from our carbon combustion rate. But then (in July 2006) he continued to deny what may be the only feasible solution, despite recognizing many errors in previous arguments against it, on the grounds that "To start building a new generation of nuclear power stations before we know what to do with the waste produced by existing plants is grotesquely irresponsible." This while blithely suggesting as an alternative that "With similar levels of investment in energy efficiency and carbon capture and storage, and the exploitation of the vast new offshore wind resources the government has now identified(13), we could cut our carbon emissions as swiftly and as effectively as any atomic power programme could." But the technology of capture and sequestration is far from well established and the wind power he refers to is just what he showed in the article above to be far less than enough to meet his country's needs. He does conclude by mentioning that neither the gas nor the wind resources in North America are proportionately nearly as large as those of the UK.
2018-06-18 01:40:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42237505316734314, "perplexity": 2076.5228816427084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859923.59/warc/CC-MAIN-20180618012148-20180618032148-00386.warc.gz"}
https://docs.microsoft.com/en-us/dotnet/framework/winforms/controls/controls-to-use-on-windows-forms
# Controls to Use on Windows Forms The following is an alphabetic list of controls and components that can be used on Windows Forms. In addition to the Windows Forms controls covered in this section, you can add ActiveX and custom controls to Windows Forms. If you do not find the control you need listed here, you can also create your own. For details, see Developing Windows Forms Controls at Design Time. For more information about choosing the control you need, see Windows Forms Controls by Function. Note Visual Basic controls are based on classes provided by the .NET Framework. ## In This Section Windows Forms Controls by Function Lists and describes Windows Forms controls based on the .NET Framework. Controls with Built-In Owner-Drawing Support Describes how to alter aspects of a control's appearance that are not available through properties. BackgroundWorker Component Enables a form or control to run an operation asynchronously. BindingNavigator Control Provides the navigation and manipulation user interface (UI) for controls that are bound to data. BindingSource Component Encapsulates a data source for binding to controls. Button Control Presents a standard button that the user can click to perform actions. CheckBox Control Indicates whether a condition is on or off. CheckedListBox Control Displays a list of items with a check box next to each item. ColorDialog Component Allows the user to select a color from a palette in a pre-configured dialog box and to add custom colors to that palette. ComboBox Control Displays data in a drop-down combo box. Provides users with an easily accessible menu of frequently used commands that are associated with the selected object. Although ContextMenuStrip replaces and adds functionality to the ContextMenu control of previous versions, ContextMenu is retained for both backward compatibility and future use if so desired. Represents a shortcut menu. Although ContextMenuStrip replaces and adds functionality to the ContextMenu control of previous versions, ContextMenu is retained for both backward compatibility and future use if so desired. DataGrid Control Displays tabular data from a dataset and allows for updates to the data source. DataGridView Control Provides a flexible, extensible system for displaying and editing tabular data. DateTimePicker Control Allows the user to select a single item from a list of dates or times. Dialog-Box Controls and Components Describes a set of controls that allow users to perform standard interactions with the application or system. DomainUpDown Control Displays text strings that a user can browse through and select from. ErrorProvider Component Displays error information to the user in a non-intrusive way. FileDialog Class Provides base-class functionality for file dialog boxes. FlowLayoutPanel Control Represents a panel that dynamically lays out its contents horizontally or vertically. FolderBrowserDialog Component Displays an interface with which users can browse and select a directory or create a new one. FontDialog Component Exposes the fonts that are currently installed on the system. GroupBox Control Provides an identifiable grouping for other controls. HelpProvider Component Associates an HTML Help file with a Windows-based application. HScrollBar and VScrollBar Controls Provide navigation through a list of items or a large amount of information by scrolling either horizontally or vertically within an application or control. ImageList Component Displays images on other controls. 
Label Control Displays text that cannot be edited by the user. ListBox Control Allows the user to select one or more items from a predefined list. ListView Control Displays a list of items with icons, in the manner of Windows Explorer. Displays a menu at run time. Although MenuStrip replaces and adds functionality to the MainMenu control of previous versions, MainMenu is retained for both backward compatibility and future use if you choose. Constrains the format of user input in a form. Provides a menu system for a form. Although MenuStrip replaces and adds functionality to the MainMenu control of previous versions, MainMenu is retained for both backward compatibility and future use if you choose. MonthCalendar Control Presents an intuitive graphical interface for users to view and set date information. NotifyIcon Component Displays icons for processes that run in the background and would not otherwise have user interfaces. NumericUpDown Control Displays numerals that a user can browse through and select from. OpenFileDialog Component Allows users to open files by using a pre-configured dialog box. PageSetupDialog Component Sets page details for printing through a pre-configured dialog box. Panel Control Provide an identifiable grouping for other controls, and allows for scrolling. PictureBox Control Displays graphics in bitmap, GIF, JPEG, metafile, or icon format. PrintDialog Component Selects a printer, chooses the pages to print, and determines other print-related settings. PrintDocument Component Sets the properties that describe what to print, and prints the document in Windows-based applications. PrintPreviewControl Control Allows you to create your own PrintPreview component or dialog box instead of using the pre-configured version. PrintPreviewDialog Control Displays a document as it will appear when it is printed. ProgressBar Control Graphically indicates the progress of an action towards completion. Presents a set of two or more mutually exclusive options to the user. RichTextBox Control Allows users to enter, display, and manipulate text with formatting. SaveFileDialog Component Selects files to save and where to save them. SoundPlayer Class Enables you to easily include sounds in your applications. SplitContainer Control Allows the user to resize a docked control. Splitter Control Allows the user to resize a docked control (.NET Framework version 1.x). StatusBar Control Displays status information related to the control that has focus. Although StatusStrip replaces and extends the StatusBar control of previous versions, StatusBar is retained for both backward compatibility and future use if you choose. StatusStrip Control Represents a Windows status bar control. Although StatusStrip replaces and extends the StatusBar control of previous versions, StatusBar is retained for both backward compatibility and future use if you choose. TabControl Control Displays multiple tabs that can contain pictures or other controls. TableLayoutPanel Control Represents a panel that dynamically lays out its contents in a grid composed of rows and columns. TextBox Control Allows editable, multiline input from the user. Timer Component Raises an event at regular intervals. ToolBar Control Displays menus and bitmapped buttons that activate commands. You can extend the functionality of the control and modify its appearance and behavior. 
Although ToolStrip replaces and adds functionality to the ToolBar control of previous versions, ToolBar is retained for both backward compatibility and future use if you choose. ToolStrip Control Creates custom toolbars and menus in your Windows Forms applications. Although ToolStrip replaces and adds functionality to the ToolBar control of previous versions, ToolBar is retained for both backward compatibility and future use if you choose. ToolStripContainer Control Provides panels on each side of a form for docking, rafting, and arranging ToolStrip controls, and a central ToolStripContentPanel for traditional controls. ToolStripPanel Control Provides one panel for docking, rafting and arranging ToolStrip controls. ToolStripProgressBar Control Overview Graphically indicates the progress of an action towards completion. The ToolStripProgressBar is typically contained in a StatusStrip. ToolStripStatusLabel Control Represents a panel in a StatusStrip control. ToolTip Component Displays text when the user points at other controls. TrackBar Control Allows navigation through a large amount of information or visually adjusting a numeric setting. TreeView Control Displays a hierarchy of nodes that can be expanded or collapsed. WebBrowser Control Hosts Web pages and provides Internet Web browsing capabilities to your application. Windows Forms Controls Used to List Options Describes a set of controls used to provide users with a list of options to choose from. Windows Forms Controls Explains the use of Windows Forms controls, and describes important concepts for working with them. Developing Windows Forms Controls at Design Time Provides links to step-by-step topics, recommendations for which kind of control to create, and other information about creating your own control. Controls and Programmable Objects Compared in Various Languages and Libraries Provides a table that maps controls in Visual Basic 6.0 to the corresponding control in Visual Basic. Note that controls are now classes in the .NET Framework. How to: Add ActiveX Controls to Windows Forms Describes how to use ActiveX controls on Windows Forms.
2019-05-21 21:09:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17611664533615112, "perplexity": 6095.604759612071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256571.66/warc/CC-MAIN-20190521202736-20190521224736-00275.warc.gz"}
https://dsp.stackexchange.com/questions/67407/realtime-sample-rate-conversion
# Realtime sample rate conversion Is it possible to have realtime sample rate conversion in such a way that a peer A with an audio stream at 44.1 KHz sends its signal over the network to another peer that is on an audio stream at a 48 KHz sample rate? If possible, how should this be accomplished? How can we work around the fact that one peer is consuming data at a different rate than the other is producing it? Thanks! • The answer is "possible? yes, and audio systems for computers have been doing that for ca 25 years now", "how to accomplish? A resampler" and "work around different rates? Resampling." May 11, 2020 at 15:11 • It can be done «realtime» with the caveat that you need to introduce a delay of ca 1/2 the filter length (assuming linear phase filtering). The longer the filter, the more accurate the resampling, and the higher the arithmetic cost. Like others have stated, any textbook on dsp will tell you how to do fractional rate resampling; you just need to distribute that work over packets. May 11, 2020 at 17:35 The consumption time and transmission time are identical: one second of data is still one second of data regardless of sampling rate. However, if the transmitter and receiver are not synchronized then buffering will ultimately be needed (as further detailed at the end of this post). The greatest common divisor of the two rates is 300, thus to resample this exactly from 44.1KHz to 48KHz you would need to use the ratio $$160/147$$ (and the inverse for the other direction): $$147$$ is factored into $$3, 7^2$$ $$160$$ is factored into $$2^5, 5$$ The following demonstrates one approach to resample from 44.1KHz to 48KHz, where care has been taken to not reduce the sampling rate below 44.1KHz (if that matters for fidelity concerns) and the multiple stages simplify the filtering needed: Interp by 4, decimate by 3, interp by 8, decimate by 7, interp by 5, decimate by 7. This would be implemented with the following structure, where the interpolator blocks signify inserting $$I-1$$ samples between each input sample (up-sampling by $$I$$) and the decimation blocks signify selecting every $$D$$th sample and throwing away the rest (down-sampling). The intermediate blocks can run at any arbitrary higher sampling rate to keep up with the throughput, and the input/output blocks are rate matched (consuming samples at 44.1KSps and providing output samples at the 48 KSps rate). To do this as shown, where I use a requirement of 20 KHz audio bandwidth and 80 dB resampling image rejection, I estimate that 171 taps would be needed for FIR1, 95 taps for FIR2 and 25 taps for FIR3 (as linear phase filters, so one multiplier for every 2 taps). For a real-time application, the expected delay through the resampler would be 7.9 ms. The filters could certainly be designed with windowed Sinc functions (this is known as the windowing approach to FIR filter design, which is sub-optimal - see our further discussion here FIR Filter Design: Window vs Parks McClellan and Least Squares). The least squares algorithm (firls in MATLAB/Octave and Python) provides an optimal solution for resampling applications, resulting in higher image rejection for a given number of taps. Further, in many resamplers (not this one, due to the close ratio), the images to be rejected can be isolated to distinct frequency bands, resulting in the use of multiband filters, which the least squares algorithms support and which further maximize rejection where it is needed most.
Interpolation and decimation resampling can also be accomplished by mapping the same filter coefficients as designed for the resampler shown above into polyphase structures as depicted in the diagram below. This would be identical in performance to the resampler above but can be done with an internal sampling rate as low as 67.2 KSps and significantly fewer overall computations. The decimation is done by selecting the appropriate filter output associated with the computation cycle for each decimator output rate (for the first two stages this just ends up being that the commutators move back one sample after every output update, and the last stage moves forward one sample after every other update). This can be a very efficient approach since only one of the filters in each stage actually needs to be computed for each output (note that each filter within a group would contain the exact same data, but the multiply and sum only needs to be done on one of them each time). Since only one filter in each stage actually needs to be computed on any given cycle, the implementation can be done with just three FIR filters (44-tap, 12-tap and 5-tap) with the coefficients updated from a ROM table for each decimator output computation cycle (this high-efficiency approach would require a tightly synchronized state machine, while instead computing all internal filters would allow for a lot of slop in the internal timing above the minimum limits). If 80 dB of image rejection or 20 KHz of audio bandwidth is not necessary, then all filter lengths can be reduced accordingly. Resampling with polyphase filters is detailed further in this post: How to implement Polyphase filter? As Robert points out in the comments, the above would work continuously if the input and output were at the exact frequencies given, or if the input frequency was slightly less, as the processing can ensure the next sample is ready prior to being clocked out by the output. The problem is that if the input frequency is slightly higher (or the output frequency slightly lower), lost samples will occur. Some buffering can be provided to sustain modest frequency variation, but the buffer will invariably overflow. The only robust solution for a real-time application without an operating time limitation is to ensure the input and output clocks are synchronized through some mechanism. Given that the OP mentions sending the data over the network, I believe buffering would then be required based on the predicted worst-case clock inaccuracies between the two locations and the longest time duration of an audio transmission. If the input and output clocks were co-located then they could be PLL-locked to each other to minimize any buffering requirements. Note that a creative solution could drive the local clock synchronization based on minimizing the buffer: a buffer half-full flag can be used as the frequency error discriminator to drive the local clock loop! • If the two clocks are independent of each other, you must add the "A" to "SRC". May 11, 2020 at 21:00 • (Got it: Asynchronous Sample Rate Conversion. Yes indeed) May 11, 2020 at 21:33 The usual answer is that you must convert by integer ratios, therefore 44.1 kHz to 48 kHz requires integer up and down conversions. Since this has been repeated in DSP textbooks for at least the past 50 years, it's almost always the answer you'll get: use a ratio of m/n, where m and n are integers.
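For reference, this kind of 160/147 integer-ratio conversion is what an off-the-shelf polyphase resampler does in one call. The snippet below is an illustrative sketch (not from either answer) using SciPy's `resample_poly`, which applies a Kaiser-windowed lowpass internally; the test-tone parameters are arbitrary.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44_100, 48_000          # gcd(44100, 48000) = 300, so the ratio is 160/147
t = np.arange(fs_in) / fs_in            # one second of samples at the input rate
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

y = resample_poly(x, up=160, down=147)  # polyphase 44.1 kHz -> 48 kHz in a single call
print(len(x), len(y))                   # 44100 48000
```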
The typical way uses a windowed sinc function—sinc being the impulse response of the ideal lowpass filter, windowed because the function is infinite and we need to make it practical. However, there is no requirement that conversion be by integer multiples. The same sinc function method can be used to find an arbitrary step to the next output sample. This may seem to preclude using a pre-calculated windowed sinc table for quick lookup, but fortunately the sinc function is relatively smooth, in a similar sense to a sine wave. With a sufficiently oversampled windowed sinc table, the curve between table points is very close to a straight line, and we can use simple linear interpolation. The method is detailed here, by Julius O. Smith; a pdf version is available near the bottom of the page: The Kaiser window (aka Kaiser-Bessel window) is a good choice for audio, not difficult to derive, and has a simple way to select stop-band attenuation as a tradeoff with transition-band width, for a given filter length (number of sinc lobes in the table window). You can calculate a windowed sinc table here. Whereas with integer rate conversions a windowed sinc table can be relatively sparse, because you know in advance every point that is required, this non-integer conversion requires a bigger table in order to allow accurate linear interpolation between table points. For instance, integer conversions may require relatively few table points, resulting in something like this, with each table point connected: But for this method we need a smoother, more oversampled table. Here is the same table, but with the length four times as big and a Factor one-fourth the amount—we could say this table is the same as before, but oversampled by a factor of four: You can see that simply oversampling by a factor of four has allowed the linear connection between table points to be more accurate. The resampling article goes into detail on the oversampling factor versus accuracy. • and you don't have to do a windowed sinc (although i agree that a Kaiser-windowed sinc is a good interpolation kernel). you can design an even better interpolation kernel using MATLAB's firls() or perhaps firpm(). it's about designing a really good brickwall low-pass FIR filter with a hella lotta taps. May 12, 2020 at 2:25 • for some reason, i am not super ultra impressed with Julius's quantitative perspective. first, it need not be a windowed sinc, so i would not base another general concern, such as what the upsample ratio is, on that. Duane Wise and i did a much better perspective on determining the upsample ratio based on what kinda interpolation is done between the upsampled points. if you're doing linear interpolation, the dB S/N is about 12 dB for each octave of upsampling plus another 12 dB. so 120 dB S/N requires 512x oversampling. May 12, 2020 at 2:33 • Good point, Robert. I chose "The typical way..." to skirt the issue of why and of other choices. But mainly I wanted to present the idea that, despite the almost universal answer of integer up/down-sampling combinations, there is no reason that it can't be done directly. Of course I'm setting aside possible multistage optimization; that can still be done as well. The bottom line is that fixed tables were chosen as the go-to route decades ago for this, due to the cost of calculating on the fly, and that locked in the integer ratio idea. JOS essentially showed recalculating the table on the fly. May 12, 2020 at 3:25 • @robertbristow-johnson I'm not sure if you caught what's going on here, Robert.
Linear interpolation is not used in the sample interpolation; it's used in the already-smooth oversampled win-sinc table, to calculate arbitrary points in the table—not the signal. So the lerp error is in the sinc coefficients, controllable by the degree to which the win-sinc table is pre-oversampled. Actually, maybe you're saying he didn't oversample the sinc enough, which may be true; I haven't read it in many years—I'll take another look after dinner... May 12, 2020 at 3:29 • Nigel, if the src ratio is irrational or arbitrary or varying, some continuous interpolation must be used between the discrete subsample times. May 12, 2020 at 12:21
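To make the "no integer ratio required" point concrete, here is a rough sketch (added for illustration; it is not the JOS implementation and uses no precomputed table or lerp) that evaluates a Kaiser-windowed sinc directly at each fractional output time. A practical implementation would use the oversampled table described above, and downsampling would additionally require scaling the kernel to the output rate; the kernel width and beta below are arbitrary choices.

```python
import numpy as np
from scipy.special import i0   # modified Bessel function I0, used for the Kaiser window

def resample_arbitrary(x, ratio, half_width=32, beta=8.0):
    """Resample x by ratio = fs_out / fs_in (assumed >= 1 here) using a
    Kaiser-windowed sinc kernel evaluated at fractional offsets."""
    n_out = int(len(x) * ratio)
    y = np.zeros(n_out)
    for m in range(n_out):
        t = m / ratio                                    # output time, in input-sample units
        k = np.arange(int(np.floor(t)) - half_width + 1,
                      int(np.floor(t)) + half_width + 1)
        k = k[(k >= 0) & (k < len(x))]                   # clip to the ends of the signal
        u = t - k                                        # fractional offsets, |u| <= half_width
        w = i0(beta * np.sqrt(1.0 - (u / half_width) ** 2)) / i0(beta)   # Kaiser window
        y[m] = np.sum(x[k] * np.sinc(u) * w)
    return y

# e.g. 0.1 s of a 1 kHz tone from 44.1 kHz to 48 kHz (the ratio could just as well be irrational)
x = np.sin(2 * np.pi * 1000 * np.arange(4410) / 44100)
y = resample_arbitrary(x, 48000 / 44100)
```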
2022-05-20 19:24:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6175208687782288, "perplexity": 1201.3666129252938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534669.47/warc/CC-MAIN-20220520191810-20220520221810-00189.warc.gz"}
http://heareresearch.blogspot.com/2015/07/7-13-2015-tlr-expression-boxplot-and.html
## Monday, July 13, 2015 ### 7 13 2015 TLR expression boxplot and statistics To see how TLR compares to the other I ran data cultivated before I put in place the optimization practices for the qPCR. TLR tends to have issues amplifying in Dabob samples which possibly don't express the gene. To get the script to work properly I had to eliminate several samples which had very low or very high expression values which were errors generated by qpcR.  TLRrepscript.R #Load in required packages for functions below require(qpcR) ## Loading required package: qpcR ## Loading required package: Matrix require(plyr) ## Loading required package: plyr require(ggplot2) ## Loading required package: ggplot2 require(splitstackshape) ## Loading required package: splitstackshape ## Loading required package: data.table #Read in raw fluorescence data from 1st Actin replicate #Remove blank first column entitled "X" rep1$X<-NULL #Rename columns so that qpcR package and appropriately handle the data rep1<-rename(rep1, c("Cycle" = "Cycles", "A1" = "H_C_1", "A2" = "N_C_1", "A3"= "S_C_1", "A4"="H_T_1", "A5"="N_T_1","A6"="S_T_1", "A7"="NT_C_1","B1" = "H_C_2", "B2" = "N_C_2","B3"= "S_C_2", "B4"="H_T_2", "B5"="N_T_2", "B6"="S_T_2","B7"="NT_C_2", "C1" = "H_C_3", "C2" = "N_C_3","C3"= "S_C_3","C4"="H_T_3", "C5"="N_T_3", "C6"="S_T_3", "C7"="NT_C_3","D1" = "H_C_4", "D2" = "N_C_4","D3"= "S_C_4", "D4"="H_T_4", "D5"="N_T_4", "D6"="S_T_4", "D7"="NT_C_4","E1" = "H_C_5", "E2" = "N_C_5", "E3"= "S_C_5", "E4"="H_T_5", "E5"="N_T_5", "E6"="S_T_5", "F1" = "H_C_6", "F2" = "N_C_6","F3"= "S_C_6", "F4"="H_T_6", "F5"="N_T_6", "F6"="S_T_6","G1" = "H_C_7", "G2" = "N_C_7", "G3"= "S_C_7", "G4"="H_T_7", "G5"="N_T_7", "G6"="S_T_7", "H1" = "H_C_8", "H2" = "N_C_8","H3"= "S_C_8", "H4"="H_T_8", "H5"="N_T_8", "H6"="S_T_8")) #Run data through pcrbatch in qpcR package which analyzes fluorescence and produces efficiency and cycle threshold values rep1ct<-pcrbatch(rep1, fluo=NULL) ## Making model for H_C_1 (l4) ## => Fitting passed... ## ## Making model for N_C_1 (l4) ## => Fitting passed... ## ## Making model for S_C_1 (l4) ## => Fitting passed... ## ## Making model for H_T_1 (l4) ## => Fitting passed... ## ## Making model for N_T_1 (l4) ## => Fitting passed... ## ## Making model for S_T_1 (l4) ## => Fitting passed... ## ## Making model for NT_C_1 (l4) ## => Fitting passed... ## ## Making model for H_C_2 (l4) ## => Fitting passed... ## ## Making model for N_C_2 (l4) ## => Fitting passed... ## ## Making model for S_C_2 (l4) ## => Fitting passed... ## ## Making model for H_T_2 (l4) ## => Fitting passed... ## ## Making model for N_T_2 (l4) ## => Fitting passed... ## ## Making model for S_T_2 (l4) ## => Fitting passed... ## ## Making model for NT_C_2 (l4) ## => Fitting passed... ## ## Making model for H_C_3 (l4) ## => Fitting passed... ## ## Making model for N_C_3 (l4) ## => Fitting passed... ## ## Making model for S_C_3 (l4) ## => Fitting passed... ## ## Making model for H_T_3 (l4) ## => Fitting passed... ## ## Making model for N_T_3 (l4) ## => Fitting passed... ## ## Making model for S_T_3 (l4) ## => Fitting passed... ## ## Making model for NT_C_3 (l4) ## => Fitting passed... ## ## Making model for H_C_4 (l4) ## => Fitting passed... ## ## Making model for N_C_4 (l4) ## => Fitting passed... ## ## Making model for S_C_4 (l4) ## => Fitting passed... ## ## Making model for H_T_4 (l4) ## => Fitting passed... ## ## Making model for N_T_4 (l4) ## => Fitting passed... ## ## Making model for S_T_4 (l4) ## => Fitting passed... 
## ## Making model for NT_C_4 (l4) ## => Fitting passed... ## ## Making model for H_C_5 (l4) ## => Fitting passed... ## ## Making model for N_C_5 (l4) ## => Fitting passed... ## ## Making model for S_C_5 (l4) ## => Fitting passed... ## ## Making model for H_T_5 (l4) ## => Fitting passed... ## ## Making model for N_T_5 (l4) ## => Fitting failed. Tagging name of N_T_5... ## ## Making model for S_T_5 (l4) ## => Fitting passed... ## ## Making model for H_C_6 (l4) ## => Fitting passed... ## ## Making model for N_C_6 (l4) ## => Fitting passed... ## ## Making model for S_C_6 (l4) ## => Fitting passed... ## ## Making model for H_T_6 (l4) ## => Fitting passed... ## ## Making model for N_T_6 (l4) ## => Fitting passed... ## ## Making model for S_T_6 (l4) ## => Fitting passed... ## ## Making model for H_C_7 (l4) ## => Fitting passed... ## ## Making model for N_C_7 (l4) ## => Fitting passed... ## ## Making model for S_C_7 (l4) ## => Fitting passed... ## ## Making model for H_T_7 (l4) ## => Fitting passed... ## ## Making model for N_T_7 (l4) ## => Fitting passed... ## ## Making model for S_T_7 (l4) ## => Fitting passed... ## ## Making model for H_C_8 (l4) ## => Fitting passed... ## ## Making model for N_C_8 (l4) ## => Fitting passed... ## ## Making model for S_C_8 (l4) ## => Fitting passed... ## ## Making model for H_T_8 (l4) ## => Fitting passed... ## ## Making model for N_T_8 (l4) ## => Fitting passed... ## ## Making model for S_T_8 (l4) ## => Fitting passed... ## ## Calculating delta of first/second derivative maxima... ## .........10.........20.........30.........40.........50 ## .. ## Found univariate outlier for NT_C_3 NT_C_4 S_T_5 H_T_6 H_T_7 ## Tagging name of NT_C_3 NT_C_4 S_T_5 H_T_6 H_T_7 ... ## Analyzing H_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... 
## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **NT_C_3** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **NT_C_4** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... 
## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing *N_T_5* ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **S_T_5** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **H_T_6** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **H_T_7** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... 
## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... #pcrbatch creates a file with each sample as an individual column in the dataframe. The problem with this is #that I want to compare all the Ct (labelled sig.cpD2) and generate expression data for them but these values have to be #in individual columns. To do this I must transpose the data and set the first row as the column names. rep1res<-setNames(data.frame(t(rep1ct)),rep1ct[,1]) #Now I must remove the first row as it is a duplicate and will cause errors with future analysis rep1res<-rep1res[-1,] #since the sample names are now in the first column the column title is row.names. This makes analys hard based on the ability to call the first column. #to eliminate this issue, I copied the first column into a new column called "Names" rep1res$Names<-rownames(rep1res) #Since each sample name contains information such as Population, Treatment, and Sample Number I want to separate out these factors #into new columns so that I can run future analysis based on population, treatment, or both. Also note the "drop = F" this is so the original names column remains. rep1res2<-cSplit_f(rep1res, splitCols=c("Names"), sep="_", drop = F) #After splitting the names column into three new columns I need to rename them appropriately. rep1res2<-rename(rep1res2, c("Names_1"="Pop", "Names_2"="Treat", "Names_3"="Sample")) #I also create a column with the target gene name. This isn't used in this analysis but will be helpful for future work. rep1res2$Gene<-rep("TLR", length(rep1res2)) #In transposing the data frame, the column entries became factors which cannot be used for equations. #to fix this, I set the entries for sig.eff (efficiency) and sig.cpD2 (Ct value) to numeric. Be aware, without the as.character function the factors will be transformed inappropriately. rep1res2$sig.eff<-as.numeric(as.character(rep1res2$sig.eff)) rep1res2$sig.cpD2<-as.numeric(as.character(rep1res2sig.cpD2)) #Now I plot the Ct values to see how they align without converting them to expression. ggplot(rep1res2, aes(x=Names,y=sig.cpD2, fill=Pop))+geom_bar(stat="identity") #Now I want to get expression information from my data set. qpcR has a way of doing this but its complicated and I'm not comfortable using it. #Luckily there is an equation I can use to do it. The equation is expression = 1/(1+efficiency)^Ctvalue. 
I tried multiple ways to get this to work in R #but it doesn't handle the complicated equation easily. #To work around this, I created a function in R to run the equation and produce an outcome. x = efficiency argument, y=Ctvalue argument expr<-function(x,y){ newVar<-(1+x)^y 1/newVar } #Now I run the data through the function and produce a useful expression value rep1res2expression<-expr(rep1res2$sig.eff, rep1res2$sig.cpD2) #Graphing the expression values is a good way to examine the data quickly for errors that might have occurred. ggplot(rep1res2, aes(x=Names,y=expression, fill=Pop))+geom_bar(stat="identity") #Before I'm able to compare the replicates I need to process the raw fluorescence from the second Actin run. #To do this I perform all the same steps as the previous replicate. rep2$X<-NULL rep2<-rename(rep2, c("Cycle" = "Cycles", "A1" = "H_C_1", "A2" = "N_C_1", "A3"= "S_C_1", "A4"="H_T_1", "A5"="N_T_1","A6"="S_T_1", "A7"="NT_C_1","B1" = "H_C_2", "B2" = "N_C_2","B3"= "S_C_2", "B4"="H_T_2", "B5"="N_T_2", "B6"="S_T_2","B7"="NT_C_2", "C1" = "H_C_3", "C2" = "N_C_3","C3"= "S_C_3","C4"="H_T_3", "C5"="N_T_3", "C6"="S_T_3", "C7"="NT_C_3","D1" = "H_C_4", "D2" = "N_C_4","D3"= "S_C_4", "D4"="H_T_4", "D5"="N_T_4", "D6"="S_T_4", "D7"="NT_C_4","E1" = "H_C_5", "E2" = "N_C_5", "E3"= "S_C_5", "E4"="H_T_5", "E5"="N_T_5", "E6"="S_T_5", "F1" = "H_C_6", "F2" = "N_C_6","F3"= "S_C_6", "F4"="H_T_6", "F5"="N_T_6", "F6"="S_T_6","G1" = "H_C_7", "G2" = "N_C_7", "G3"= "S_C_7", "G4"="H_T_7", "G5"="N_T_7", "G6"="S_T_7", "H1" = "H_C_8", "H2" = "N_C_8","H3"= "S_C_8", "H4"="H_T_8", "H5"="N_T_8", "H6"="S_T_8")) rep2ct<-pcrbatch(rep2, fluo=NULL) ## Making model for H_C_1 (l4) ## => Fitting passed... ## ## Making model for N_C_1 (l4) ## => Fitting passed... ## ## Making model for S_C_1 (l4) ## => Fitting passed... ## ## Making model for H_T_1 (l4) ## => Fitting passed... ## ## Making model for N_T_1 (l4) ## => Fitting passed... ## ## Making model for S_T_1 (l4) ## => Fitting passed... ## ## Making model for NT_C_1 (l4) ## => Fitting passed... ## ## Making model for H_C_2 (l4) ## => Fitting passed... ## ## Making model for N_C_2 (l4) ## => Fitting passed... ## ## Making model for S_C_2 (l4) ## => Fitting passed... ## ## Making model for H_T_2 (l4) ## => Fitting passed... ## ## Making model for N_T_2 (l4) ## => Fitting passed... ## ## Making model for S_T_2 (l4) ## => Fitting passed... ## ## Making model for NT_C_2 (l4) ## => Fitting passed... ## ## Making model for H_C_3 (l4) ## => Fitting passed... ## ## Making model for N_C_3 (l4) ## => Fitting passed... ## ## Making model for S_C_3 (l4) ## => Fitting passed... ## ## Making model for H_T_3 (l4) ## => Fitting passed... ## ## Making model for N_T_3 (l4) ## => Fitting passed... ## ## Making model for S_T_3 (l4) ## => Fitting passed... ## ## Making model for NT_C_3 (l4) ## => Fitting passed... ## ## Making model for H_C_4 (l4) ## => Fitting passed... ## ## Making model for N_C_4 (l4) ## => Fitting passed... ## ## Making model for S_C_4 (l4) ## => Fitting passed... ## ## Making model for H_T_4 (l4) ## => Fitting passed... ## ## Making model for N_T_4 (l4) ## => Fitting passed... ## ## Making model for S_T_4 (l4) ## => Fitting passed... ## ## Making model for NT_C_4 (l4) ## => Fitting passed... ## ## Making model for H_C_5 (l4) ## => Fitting passed... ## ## Making model for N_C_5 (l4) ## => Fitting passed... ## ## Making model for S_C_5 (l4) ## => Fitting passed... ## ## Making model for H_T_5 (l4) ## => Fitting passed... 
## ## Making model for N_T_5 (l4) ## => Fitting passed... ## ## Making model for S_T_5 (l4) ## => Fitting passed... ## ## Making model for H_C_6 (l4) ## => Fitting passed... ## ## Making model for N_C_6 (l4) ## => Fitting passed... ## ## Making model for S_C_6 (l4) ## => Fitting passed... ## ## Making model for H_T_6 (l4) ## => Fitting passed... ## ## Making model for N_T_6 (l4) ## => Fitting passed... ## ## Making model for S_T_6 (l4) ## => Fitting passed... ## ## Making model for H_C_7 (l4) ## => Fitting passed... ## ## Making model for N_C_7 (l4) ## => Fitting passed... ## ## Making model for S_C_7 (l4) ## => Fitting passed... ## ## Making model for H_T_7 (l4) ## => Fitting passed... ## ## Making model for N_T_7 (l4) ## => Fitting passed... ## ## Making model for S_T_7 (l4) ## => Fitting passed... ## ## Making model for H_C_8 (l4) ## => Fitting passed... ## ## Making model for N_C_8 (l4) ## => Fitting passed... ## ## Making model for S_C_8 (l4) ## => Fitting passed... ## ## Making model for H_T_8 (l4) ## => Fitting passed... ## ## Making model for N_T_8 (l4) ## => Fitting passed... ## ## Making model for S_T_8 (l4) ## => Fitting passed... ## ## Calculating delta of first/second derivative maxima... ## .........10.........20.........30.........40.........50 ## .. ## Found univariate outlier for H_T_2 ## Tagging name of H_T_2 ... ## Analyzing H_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_1 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing **H_T_2** ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_2 ... 
## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_2 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_3 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing NT_C_4 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... 
## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_5 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_6 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_7 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... 
## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_C_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing H_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing N_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... ## ## Analyzing S_T_8 ... ## Calculating 'eff' and 'ct' from sigmoidal model... ## Using window-of-linearity... ## Fitting exponential model... ## Using linear regression of efficiency (LRE)... rep2res<-setNames(data.frame(t(rep2ct)),rep2ct[,1]) rep2res<-rep2res[-1,] rep2res$Names<-rownames(rep2res) rep2res2<-cSplit_f(rep2res, splitCols=c("Names"), sep="_", drop = F) rep2res2<-rename(rep2res2, c("Names_1"="Pop", "Names_2"="Treat", "Names_3"="Sample")) rep2res2$Gene<-rep("TLR", length(rep2res2)) rep2res2$sig.eff<-as.numeric(as.character(rep2res2$sig.eff)) rep2res2$sig.cpD2<-as.numeric(as.character(rep2res2$sig.cpD2)) ggplot(rep2res2, aes(x=Names,y=sig.cpD2, fill=Pop))+geom_bar(stat="identity") expr<-function(x,y){ newVar<-(1+x)^y 1/newVar } rep2res2$expression<-expr(rep2res2$sig.eff, rep2res2$sig.cpD2) ggplot(rep2res2, aes(x=Names,y=expression, fill=Pop))+geom_bar(stat="identity") #Now that I have Ct values, efficiencies and expression values for both replicates I can create a table of the differences between reps. #To do this I create a data frame with a single formula that creates a column of values generated by subtracting the first run from the second. repcomp<-as.data.frame(rep1res2$sig.cpD2-rep2res2$sig.cpD2) #Now I need to add some Names for the samples to use with ggplot.Since the names column contains all the relevant information #I copy only that column and run the split function on it again as well as the rename function. repcomp$Names<-rep1res2$Names repcomp<-cSplit_f(repcomp, splitCols=c("Names"), sep="_", drop = F) #To better address the difference column in ggplot I need to rename it something simple and short. repcomp<-rename(repcomp, c("rep1res2$sig.cpD2 - rep2res2$sig.cpD2"="rep.diff", "Names_1"="Pop", "Names_2"="Treat", "Names_3"="Sample")) #Now I just run the data through ggplot to generate a bar graph exploring the differences between the two replicate in terms of Ct values. 
ggplot(repcomp, aes(x=Names, y=rep.diff, fill=Pop))+geom_bar(stat="identity") tlr<-as.data.frame(cbind(rep1res2$expression,rep1res2$Names,rep1res2$Pop,rep1res2$Treat,rep2res2$expression)) tlr<-rename(tlr, c(V1="rep1.expr","V2"="name","V3"="pop","V4"="treat" ,"V5"="rep2.expr")) tlr$rep1.expr<-as.numeric(as.character(tlr$rep1.expr)) tlr$rep2.expr<-as.numeric(as.character(tlr$rep2.expr)) tlr$avgexpr<-rowMeans(tlr[,c("rep1.expr","rep2.expr")],na.rm=F) tlr<-tlr[which(tlr$name!=c("H_C_3")),] tlr<-tlr[which(tlr$name!=c("H_T_2")),] tlr<-tlr[which(tlr$pop!=c("**S")),] tlr<-tlr[which(tlr$pop!=c("**H")),] tlr<-tlr[which(tlr$pop!=c("**NT")),] tlr<-tlr[which(tlr$pop!=c("NT")),] tlr<-tlr[which(tlr$pop!=c("*N")),] ggplot(tlr, aes(x=treat,y=avgexpr, fill=pop))+geom_boxplot() fit<-aov(avgexpr~pop+treat+pop:treat,data=tlr) fit ## Call: ## aov(formula = avgexpr ~ pop + treat + pop:treat, data = tlr) ## ## Terms: ## pop treat pop:treat Residuals ## Sum of Squares 1.387959e-24 3.623459e-24 1.087264e-24 1.521966e-23 ## Deg. of Freedom 2 1 2 36 ## ## Residual standard error: 6.502063e-13 ## Estimated effects may be unbalanced TukeyHSD(fit) ## Tukey multiple comparisons of means ## 95% family-wise confidence level ## ## Fit: aov(formula = avgexpr ~ pop + treat + pop:treat, data = tlr) ## ##$pop ## diff lwr upr p adj ## N-H 1.871523e-13 -4.283799e-13 8.026845e-13 0.7396288 ## S-H 4.499808e-13 -1.655514e-13 1.065513e-12 0.1883345 ## S-N 2.628285e-13 -3.175009e-13 8.431578e-13 0.5160357 ## ## $treat ## diff lwr upr p adj ## T-C -5.895191e-13 -9.983307e-13 -1.807076e-13 0.0059349 ## ##$pop:treat ## diff lwr upr p adj ## N:C-H:C 2.035564e-13 -8.088703e-13 1.215983e-12 0.9900053 ## S:C-H:C 7.815453e-13 -2.308813e-13 1.793972e-12 0.2116836 ## H:T-H:C -3.495711e-13 -1.495001e-12 7.958589e-13 0.9392588 ## N:T-H:C -1.437124e-13 -1.189342e-12 9.019174e-13 0.9983185 ## S:T-H:C -2.410672e-13 -1.286697e-12 8.045626e-13 0.9815165 ## S:C-N:C 5.779889e-13 -4.001082e-13 1.556086e-12 0.4920724 ## H:T-N:C -5.531275e-13 -1.668330e-12 5.620748e-13 0.6712284 ## N:T-N:C -3.472688e-13 -1.359695e-12 6.651579e-13 0.9039812 ## S:T-N:C -4.446237e-13 -1.457050e-12 5.678030e-13 0.7714929 ## H:T-S:C -1.131116e-12 -2.246319e-12 -1.591417e-14 0.0451756 ## N:T-S:C -9.252577e-13 -1.937684e-12 8.716899e-14 0.0898974 ## S:T-S:C -1.022613e-12 -2.035039e-12 -1.018588e-14 0.0465536 ## N:T-H:T 2.058587e-13 -9.395713e-13 1.351289e-12 0.9940347 ## S:T-H:T 1.085039e-13 -1.036926e-12 1.253934e-12 0.9997234 ## S:T-N:T -9.735487e-14 -1.142985e-12 9.482749e-13 0.9997459 The graph represents boxplots generated from the average expression value of both TLR reps. The darkened line in the box represents the median value, the box equals the second and third quartiles, the lines are the 1st and 4th quartiles. Dots are data outliers from the data set. There appears to be a difference between Dabob treated group and the Oyster Bay control, it also appears there is a difference between Oyster Bay control and Oyster Bay treatment group. The statistics are broken into groups of population ($pop), treatment ($treat), and population by treatment (\$'pop:treat'). The statistics show that the populations are not significantly different from one another, but the treatment/control are. They also show that the Dabob treatment is significantly different than the Oyster Bay control. The Oyster Bay treatment was also significantly different from the Oyster Bay control.
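For reference, the relative-expression formula used throughout the script above, where E is the amplification efficiency (the sig.eff column) and $C_t$ the threshold cycle (the sig.cpD2 column) reported by qpcR, can be written in display form as

$$\text{expression} = \frac{1}{(1+E)^{C_t}}$$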
2017-11-18 11:49:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34813010692596436, "perplexity": 4172.805214228462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804881.4/warc/CC-MAIN-20171118113721-20171118133721-00137.warc.gz"}
https://thecareerlabs.com/free-gre-prep/gre-scores
The Graduate Record Examination or, as we commonly call it, the GRE is an examination designed for candidates who want to apply for graduate programmes. As you might be aware, the GRE is a globally accepted, computer-based or paper-based multiple-choice examination, which consists of three sections: Verbal Reasoning, Quantitative Reasoning, and Analytical Writing. In the following sections, we will explore how GRE scores work and what might be considered a good score. We will also outline the validity of these scores and share some tips to help you understand the scoring scheme. GRE scores and the scoring scheme might seem complex and might intimidate you when you try to decode them. But a closer look at the scores will make you realise that understanding them is a simple task after all. It is essential to understand the scores before we can dive into how the calculation works. The scores are divided according to the three components of the GRE exam. The division is as follows:

1. Verbal Reasoning – 130-170
2. Quantitative Reasoning – 130-170
3. Analytical Writing – 0-6

While Verbal and Quantitative Reasoning scores increase in one-point increments, the Analytical Writing score increases in half-point increments.

#### GRE Scoring Scheme

The GRE score is reported in two ways, reflecting how the GRE is calculated. The first is a scaled score from 130 to 170, 130 being the lowest and 170 the highest. The second is a percentile rank, which shows how you compare with the other candidates. The GRE is a competitive exam designed to handpick the best from the lot; hence the percentile rank is often preferred, as it tells you how well you have fared compared with fellow candidates and where you stand in the overall ranking. The percentile rank denotes the share of candidates who have scored less than you. For example, an 80th percentile rank means you have scored better than 80 percent of the candidates who appeared for the GRE examination. The highest total score, or GRE full score, is 340, whereas the lowest total score is 260. If you score a 170 in either Verbal Reasoning or Quantitative Reasoning, your percentile rank would be 99. A total score between 320 and 340 counts as an excellent score, as your rank would lie between the 90th and 99th percentiles, placing you among the top ten percent. A good score is any score of 300 or above; this would rank you anywhere between the 80th and 90th percentiles. An average score is around 150 in each section (Verbal Reasoning and Quantitative Reasoning), which corresponds to a percentile rank of roughly 50 to 55. We have so far discussed the first two sections of the GRE and their scoring pattern; we now move on to the third section, the Analytical Writing Assessment. Its scoring pattern is slightly different: the scale runs from zero to six, increasing in half-point increments. If you score a 6, your percentile rank is around 99; similarly, if you score a 4, your percentile rank is around 50. The Verbal Reasoning and Quantitative Reasoning sections are easier to score in, as they consist of objective questions. The Analytical Writing Assessment, on the other hand, proves difficult to score in, as it is highly subjective and individualised. Are you then wondering what might count as a good score? We will explore that in the next section.
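One way to write out the percentile-rank idea described a few paragraphs above (an illustrative formula, not part of the original article):

$$\text{percentile rank} = \frac{\text{number of test takers who scored below you}}{\text{total number of test takers}} \times 100$$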
GRE Scores Quiz: evaluate your GRE knowledge by attempting this quiz.

1 / 3 Lisa and Ode went to a gift shop for shopping. Lisa purchased a gift for $10 and Ode didn't buy anything, as she didn't find anything interesting. Before shopping, Lisa and Ode between themselves had $150, and if after shopping Lisa had 50% more than what Ode had, how much did Lisa have initially, before shopping?

2 / 3 Oceanographer: Many activists blame fishermen alone for the decline in the number of edible Krill Fish in the past 10 years. Yet clearly, blue whales also have played an important role in this decline. In the past ten years, the number of blue whales in the territorial oceans has increased sharply, and examination of dead blue whales has shown that a number of them have recently fed on Krill Fish. In the oceanographer's argument, the portion in boldface plays which of the following roles?

3 / 3 Despite his playful & nonchalant facade, Mr Thomas Brown is _________________ entrepreneur who is aware of how to market himself.

The average score is 25%.

#### Good Score: A myth or reality?

You must be wondering: what is a good GRE score? A good score purely depends on the criteria set by the universities you plan to apply to. These criteria differ between universities, and between individual graduate programmes within a university as well. Thus, it is important to take note of these criteria and research them thoroughly before you begin your preparation for the exam. Preparing for any kind of examination takes a toll on your mental and physical health, hence it is advisable to keep a realistic goal or target score in mind and move towards this goal with the help of practice tests. If you follow through with your goals and target score persistently, a good score becomes an achievable reality.

#### Validity of scores

Once you receive your GRE scores, they remain valid for five years from the date of your examination. If you are not satisfied with your scores, you also have the option of retaking the GRE, although that might prove to be an expensive affair. Hence, it is recommended that you perform well on the first attempt. Lastly, most candidates aim for an excellent or above-average score, which can prove to be very stressful. A way to beat the stress is to set personal goals and targets and adhere to them while tracking your progress on a daily basis. Thus, it is imperative to have all the right information before you begin the preparation process. As we have reached the end of this article, you now have all the necessary information at your fingertips. In the beginning, we mentioned how the GRE scoring scheme might intimidate you along the way, but when broken down into fragments of information, as we have just done, it is not as complex as you might think!

#### FAQs

1. Is it possible to achieve my GRE target score? Yes, you can achieve your target GRE score as long as you make a good study plan and stick to it diligently.
2. What is the maximum possible GRE score? The maximum possible GRE score is 340.
3. Which sections contribute to the composite GRE score? While the GRE score out of 340 is calculated from your scores in the Verbal Reasoning and Quantitative Reasoning sections, the Analytical Writing Assessment section is equally important and you should not neglect it.
2022-11-29 14:36:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3996149003505707, "perplexity": 1207.2302823152304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00169.warc.gz"}
https://matheducators.stackexchange.com/tags/students-mistakes/hot
# Tag Info Accepted ### Should students be told they're wrong All four of your options lead with "They are told..." Consider asking the student questions instead. At the very least, this shows interest, and they may end up catching their own mistakes as they try ... • 606 Accepted ### How does a teacher come up with plausible wrong answers for multiple choice tests? I freelance as an item writer, someone who writes questions for standardized tests. When making up alternate choices, I always have to justify my reasons for the "wrong answers" or distractors. Here ... • 7,161 ### Misuse of parentheses for multiplication It is actually wrong to say that parenthesis means multiplication. In $(2)(5)$ it is the lack of an operator between the parenthesis that implies multiplication, NOT the parenthesis. The parenthesis ... • 401 Accepted ### Near-universal student mistake on $\lim_{x\rightarrow\infty}e^{x+1}/e^x$ While all of your students at this point will have done (extensive) units on manipulating and simplifying expressions with exponents, this is the limits unit. When doing limits questions, most ... • 4,900 Accepted ### Grating mathematical phrases---How to correct? Personally, I don't think we attend to this sufficiently in lower-level mathematics (where it's actually needed most). Students need that vocabulary to interface with books, future teachers, tutors, ... • 21.3k ### How should a student's inefficient calculation be pointed out? Foremost: It depends on what the lead-in lesson/topic/direction was. If this was the essential point being exercised, then I would interrupt ASAP and refocus them on the lesson/direction that just ... • 21.3k ### How to explain what's wrong with this application of the chain rule? The root of the difficulty is that $x$ appears free in $f(z)$, but we are trying to "capture" it with $g(x)$, which is illegal. When we substitute $g(x)$ into $f(g(x))$, we have a variable clash: f(... • 556 Accepted ### Quote to show students don't have to fear making mistakes Johann Wolfgang von Goethe: "By seeking and blundering we learn." Original German, 1825. Albert Einstein: "Anyone who has never made a mistake has never tried anything new." (However, attribution to ... • 28.4k ### teach that $\frac10$ not defined properly What is $\frac 1 a$? It is the unique (real) number such that $a\cdot \frac 1 a=1$. Does there exist a real number that multiplied by $0$ gives $1$? No. Why is this? Because if $0\cdot b=0$ which ever ... • 1,667 ### How does a teacher come up with plausible wrong answers for multiple choice tests? If a teacher has taught the course before, and has asked questions that are free-response (not multiple-choice), then the teacher can look at the incorrect answers previously given by the students. ... • 10.4k Accepted ### How do you coach students who often make small errors? Ask the student to "talk through" their calculations Having a student verbalize their calculation may force them to pay more attention (or a different kind of attention) to their work that ... • 4,468 Accepted ### Metonymy in mathematics Metonymy and its relatives, metaphor, polysemy, synecdoche occur all over the place in mathematical writing, and sometimes cause students problems and sometimes don't, because those thought processes ... ### A Series of Unfortunate Examples! Personally, I refer to this phenomenon as students "submarining" a broken understanding on a particular kind of problem. Example #1: Our in-house elementary algebra textbook, in its first edition, ... 
• 21.3k Accepted ### Misuse of parentheses for multiplication To answer the ultimate question ("Can anybody explain where this writing tradition comes from?"): It's explicitly taught that way by many U.S. instructors and textbooks. Examples: From the otherwise ... • 21.3k Accepted ### How should a student's inefficient calculation be pointed out? I like your second option the best: ...wait for them to finish the calculation, or even finish the entire exercise, before I casually tell them there was a more natural way to work out that part? ... • 7,938 ### Multiple students writing $y\frac{d}{dx}$ rather than $\frac{d}{dx}y$ -- why? When a student writes incorrect notation, ask them to read it out loud. I would say something like: Something here doesn't look right, but we can fix it. Could you read this work out loud? I think ... • 19.3k ### Why do students only see the last term of a sum abbreviated with an ellipsis? I suspect that the issue is not so much the ellipsis per se but a problem with notation in general, and in particular with the correct use of the equals sign. At the risk of repeating what I wrote in ... • 16.7k ### How can a teacher help a student who has internalized mistakes? Coach them through doing it the right way. Have them repeat it the right way, several times. In front of you. And go very easy, including repeats. Gradually relax the guardrails and keep drilling. ... • 199 Accepted ### Students problems with reasoning, not exactly math You can say that this is "just reasoning", but the truth is that this is a specific application of basic logic, in particular the implication (if/then) relation. I have a colleague with a ... • 21.3k ### Misuse of parentheses for multiplication I disagree that it is "terribly harmful". Do not prevent them from writing $(2)(5)$. Instead prevent them from writing things that are actually wrong. Thinking that $\sin x$ is $\sin$ times $x$ ... • 6,526 ### How to explain that the sums of numerators over sums of denominators isn't the same as the mean of ratios? One observation is that (sum of numerators) divided by (sum of denominators) is not well defined. For example, let's work with the two ratios $a=\frac01$ and $b=\frac11$. The ratio of the sum of ... • 8,133 ### Is this just a mistake or a more serious misconception? It seems clear that there is a certain conceptual gap in the student's understanding. My suspicion is that the student is essentially running the following program in his mind: Initialize factorial =... • 2,561 ### Near-universal student mistake on $\lim_{x\rightarrow\infty}e^{x+1}/e^x$ First of all, I think that the problem statement can be confusing. use L'Hospital's rule if possible, or if not, explain why it didn't work and evaluate it by some other method It can be ... • 261 ### Why does the widespread erroneous definition of "irrational number" persist without being taught? I can think of two related reasons: The characterization via the decimal expansion might be perceived more strongly like a property of the number: "This number is irrational, because this number's ... • 7,582 ### A parabolic arc is not semicircular. But students think so This is pretty natural, I think. People understand things in terms of things that they already know, and while Calc 1/2/3 students should theoretically have a reasonably well developed 'catalog' of ... 
• 4,900 ### Mnemonics for some properties in mathematics Recently, a student in my beginning algebra course offered the following to the class, regarding signed number multiplication: Assuming positivity is like love, and negativity is like hate, then... "... • 7,938 ### Should students be told they're wrong This is obviously a subjective topic, but here's my take: As an educator, you should see yourself as a resource to your students. You have certain knowledge that they seek to obtain. You should never ... • 249
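The reasoning in the "teach that $\frac10$ not defined" excerpt above is cut off by the excerpting; a one-line sketch of the standard completion: if $\frac{1}{0}$ were a real number $b$, it would have to satisfy $0 \cdot b = 1$, but $0 \cdot b = 0$ for every real $b$ and $0 \neq 1$, so no such $b$ exists and $\frac{1}{0}$ is left undefined.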
2022-11-30 14:50:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6430307030677795, "perplexity": 1683.4531796019733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00761.warc.gz"}
http://gnu-make.2324884.n4.nabble.com/Backslash-handling-not-POSIX-compliant-td20313.html
# Backslash handling not POSIX-compliant

4 messages

## Backslash handling not POSIX-compliant

Doubling the backslash suppresses special newline handling in GNU make. I don't see anything in the standard that allows this. It defines an escaped newline as one preceded by a backslash and doesn't say anything about backslashes being treated specially otherwise.

Test case:

>.POSIX:
>test:
> echo \\
> true

Expected result: the command line

>echo \\
>true

being run, and

>\
>true

being written to the standard output.

Actual result: two command lines

>echo \\

and

>true

being run, and

>\

being written to the standard output.

## Re: Backslash handling not POSIX-compliant

On Tue, 2020-07-28 at 08:07 +0300, Ivan Kozlov wrote:
> Doubling the backslash suppresses special newline handling in GNU
> make. I don't see anything in the standard that allows this. It
> defines an escaped newline as one preceded by a backslash and doesn't
> say anything about backslashes being treated specially otherwise.

It is documented behavior in the GNU make manual so I'm not going to change this for standard makefiles. However, we could consider making it work differently for makefiles where .POSIX: is set. I am not really excited about doing that but we can consider it.

My test case is actually wrong. The standard output would be the same in both cases. Here is a proper test case:

>.POSIX:
>test:
> A=\\
> echo $$A

The expected result is

>\

being written to the standard output. The actual result is an empty line being written.

Another example is:

>.POSIX:
>test:
> echo "a \\
> b"

which fails instead of printing

>a \
>b

The expected behaviour is useful because it allows portably quoting macros with here-documents, for example:

> sed '$s:\\$::' <<\end; : \\
> $V\
> end

should print the literal value of the macro $V, which can contain single quotes and special characters. I believe there is no other way to achieve this with POSIX make.

Ivan Kozlov (28 July 2020 16:18) wrote
> The expected behaviour is useful because it allows portably quoting
> macros with here-documents, for example:
> > sed '$s:\\$::' <<\end; : \\
> > $V\
> > end
> should print the literal value of the macro $V that can contain single
> quotes and special characters. I believe there is no other way to
> achieve this with POSIX make.

Putting quotes round the here-tag has the same effect as escaping it:

$ V=argle
$ cat <<EOF
> some text $V
> EOF
some text argle
$ cat <<"EOF"
> some text $V
> EOF
some text $V
$ cat <<'EOF'
> some text $V
> EOF
some text $V
$ cat <<\EOF
> some text $V
> EOF
some text $V

I find this works in both dash and bash. Not sure how that maps out for the use within make, though.

        Eddy.
2020-10-30 05:29:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9915674924850464, "perplexity": 4837.9725296797415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00132.warc.gz"}
http://www.solutioninn.com/the-following-information-describes-a-companys-usage-of-direct-labor
# Question

The following information describes a company's usage of direct labor in a recent period. Compute the direct labor rate and efficiency variances for the period.

Actual direct labor hours used: 65,000
Actual direct labor rate per hour: $15
Standard direct labor rate per hour: $14
Standard direct labor hours for units produced: 67,000
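A worked sketch of the requested computation, using the standard textbook formulas for direct labor variances (AH = actual hours, AR = actual rate per hour, SR = standard rate per hour, SH = standard hours allowed; U = unfavorable, F = favorable):

$$\text{Rate variance} = AH \times (AR - SR) = 65{,}000 \times (\$15 - \$14) = \$65{,}000\ \text{U}$$

$$\text{Efficiency variance} = SR \times (AH - SH) = \$14 \times (65{,}000 - 67{,}000) = -\$28{,}000,\ \text{i.e.}\ \$28{,}000\ \text{F}$$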
2016-10-28 01:09:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9890519976615906, "perplexity": 133.52347896062986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00343-ip-10-171-6-4.ec2.internal.warc.gz"}
https://wiki.haskell.org/index.php?title=Prime_numbers&oldid=42488
# Prime numbers In mathematics, a prime number (or a prime) is a natural number which has exactly two distinct natural number divisors: 1 and itself. The smallest prime is thus 2. Any natural number is representable as a product of powers of its prime factors, and so a prime has no prime divisors other than itself. That means that starting with 2, for each newly found prime we can eliminate from the rest of the numbers all such numbers that have this prime as their divisor, giving us the next available number as next prime. This is known as sieving the natural numbers, so that in the end what we are left with are just primes. To eliminate a prime's multiples from the result we can either a. plainly test each new candidate number for divisibility by that prime with a direct test, giving rise to a kind of "trial division" algorithm; or b. we can find out multiples of a prime p by counting up from it by p numbers at a time, resulting in a variant of a "genuine sieve" as it was reportedly originally conceived by Eratosthenes in ancient Greece. Having a direct-access mutable arrays indeed enables easy marking off of these multiples on pre-allocated array as it is usually done in imperative languages; but to get an efficient list-based code we have to be smart about combining those streams of multiples of each prime - which gives us also the memory efficiency in generating the results one by one. ## Prime Number Resources • At Wikipedia: • HackageDB packages: • arithmoi: Various basic number theoretic functions; efficient array-based sieves, Montgomery curve factorisation ... • Numbers: An assortment of number theoretic functions. • NumberSieves: Number Theoretic Sieves: primes, factorization, and Euler's Totient. • primes: Efficient, purely functional generation of prime numbers. • Papers: • O'Neill, Melissa E., "The Genuine Sieve of Eratosthenes", Journal of Functional Programming, Published online by Cambridge University Press 9 October 2008 doi:10.1017/S0956796808007004. ## Sieve of Eratosthenes Sieve of Eratosthenes is genuinely represented by -- genuine yet wasteful sieve of Eratosthenes primesTo m = 2 : eratos [3,5..m] where eratos [] = [] eratos (p:xs) = p : eratos (xs minus [p,p+2*p..m]) -- eulers (p:xs) = p : eulers (xs minus map (p*)(p:xs)) -- turner (p:xs) = p : turner [x | x<-xs, rem x p /= 0] This should be regarded more like a specification, not a code. It is extremely slow, running at empirical time complexities worse than quadratic in number of primes produced. But it has the core defining features of S. of E. as a. being bounded, i.e. having a top limit value, and b. finding out the multiples of a prime by counting up from it. Yes, this is exactly how Eratosthenes defined it (Nicomachus, Introduction to Arithmetic, I, pp. 13, 31). The canonical list-difference minus and duplicates-removing list-union union functions dealing with ordered increasing lists - infinite as well as finite - are simple enough to define (using compare has an effect of comparing the values only once, unlike when using (<) etc): -- ordered lists, difference and union minus (x:xs) (y:ys) = case (compare x y) of LT -> x : minus xs (y:ys) EQ -> minus xs ys GT -> minus (x:xs) ys minus xs _ = xs union (x:xs) (y:ys) = case (compare x y) of LT -> x : union xs (y:ys) EQ -> x : union xs ys GT -> y : union (x:xs) ys union xs ys = xs ++ ys (the name merge ought to be reserved for duplicates-preserving merging as used by mergesort - that's why we use union here, following Leon P.Smith's Data.List.Ordered package). 
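A quick sanity check of these definitions in GHCi (a hypothetical session; note the backquotes, which turn minus and union into the infix operators the code above intends):

> take 10 (primesTo 100)
[2,3,5,7,11,13,17,19,23,29]
> [3,5..19] `minus` [9,15..19]
[3,5,7,11,13,17,19]
> [1,3,5] `union` [2,3,4]
[1,2,3,4,5]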
### Analysis So what does it do, this sieve code? For each found prime it removes its odd multiples from further consideration. It finds them by counting up in steps of 2p. There are thus $O(m/p)$ multiples for each prime, and $O(m \log\log(m))$ multiples total, with duplicates, by virtues of prime harmonic series, where $\sum_{p_i. If each multiple is dealt with in $O(1)$ time, this will translate into $O(m \log \log(m))$ RAM machine operations (since we consider addition as an atomic operation). Indeed, mutable random-access arrays allow for that. Melissa O'Neill's article's stated goal was to show that so does efficient Priority Queue implementation in Haskell as well. But lists in Haskell are sequential, not random-access, and complexity of minus(a,b) is about $O(|a \cup b|)$, i.e. here it is $O(|a|)$ which is $O(m/\log(p))$ according to the remark about the Φ-function in Melissa O'Neill's article. It looks like $\sum_{i=1}^{k}{1/log(p_i)} = O(k/\log(k))$. Since the number of primes below m is $n = \pi(m) = O(m/\log(m))$ by the prime number theorem (where $\pi(m)$ is a prime counting function), there will be k = n multiples-removing steps in the algorithm; it means total complexity of $O(m k/\log(k)) = O(m^2/(\log(m))^2)$, or $O(n^2)$ in n primes produced - much much worse than the optimal $O(n \log(n) \log\log(n))$. ### From Squares But we can start each step at the prime's square, as its smaller multiples will be already processed on previous steps. This means we can stop early, when the prime's square reaches the top value m, and thus cut the total number of steps to around $k = \pi(\sqrt{m}) = O(2\sqrt{m}/\log(m))$. This does not in fact change the complexity of random-access code, but for lists it makes it $O(m^{1.5}/(\log m)^2)$, or $O(n^{1.5}/\sqrt{\log n})$ in n primes produced, showing an enormous speedup in practice: primesToQ m = 2 : sieve [3,5..m] where sieve [] = [] sieve (p:xs) = p : sieve (xs minus [p*p,p*p+2*p..m]) Its empirical complexity is about $O(n^{1.45})$. This simple optimization of starting from a prime's square has dramatic effect here because our formulation is bounded, in accordance with the original algorithm. This has the desired effect of stopping early and thus preventing the creation of all the extraneous multiples streams which start beyond the upper bound anyway, turning the unacceptably slow initial specification into a code yet-far-from-optimal and slow, but acceptably so, striking a good balance between clarity, succinctness and efficiency. ### Guarded This ought to be explicated (improving on clarity though not on time complexity) as in the following, for which it is indeed a minor optimization whether to start from p or p*p - but only after we've went to the necessary trouble of explicitly stopping as soon as possible: primesToG m = 2 : sieve [3,5..m] where sieve (p:xs) | p*p > m = p : xs | True = p : sieve (xs minus [p*p,p*p+2*p..m]) It is now clear that it can't be made unbounded just by abolishing the upper bound m, because the guard can not be simply omitted without changing the complexity back for the worst. 
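For instance, both bounded variants above should agree with the original specification on small inputs (a hypothetical GHCi check, assuming the definitions compile as intended with minus used infix):

> primesToQ 50 == primesTo 50
True
> primesToG 50
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47]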
### Accumulating Array So while minus(a,b) takes $O(|b|)$ operations for random-access imperative arrays and about $O(|a|)$ operations for lists here, using Haskell's immutable array for a one could expect the array update time to be indeed $O(|b|)$ but it looks like it's not so: primesToA m = sieve 3 $array (3,m) [(i,odd i) | i<-[3..m]] where sieve p a | p*p>m = 2 : [i | (i,True) <- assocs a] | a!p = sieve (p+2)$ a//[(i,False) | i <- [p*p,p*p+2*p..m]] | True = sieve (p+2) a It's much slower than the above, though it should be running at optimal complexity on implementations which are able to properly use the destructive update here, for an array being passed along as an accumulating parameter, it seems. How this implementation deficiency is to be overcome? One way is to use explicitly mutable monadic arrays (see below), but we can also think about it a little bit more on the functional side of things. ### Postponed Going back to guarded Eratosthenes, first we notice that though it works with minimal number of prime multiples streams, it still starts working with each a bit prematurely. Fixing this with explicit synchronization won't change complexity but will speed it up some more: primesPE () = 2 : primes' where primes' = sieve [3,5..] primes' 9 sieve (p:xs) ps@ ~(_:t) q | p < q = p : sieve xs ps q | True = sieve (xs minus [q,q+2*p..]) t (head t^2) Since the removal of a prime's multiples here starts at the right moment, and not just from the right place, the code could now finally be made unbounded. Because no multiples-removal process is started prematurely, there are no extraneous multiples streams, which were the reason for the extreme wastefulness and thus inefficiency of the original formulation. ### Segmented With work done segment-wise between the successive squares of primes it becomes primesSE () = 2 : primes' where primes' = 3 : sieve primes' 5 9 [] sieve (p:ps) x q fs = foldr (flip minus) [x,x+2..q-2] [[y+s,y+2*s..q] | (s,y) <- fs] ++ sieve ps (q+2) (head ps^2) ((2*p,q):[(s,q-rem (q-y) s) | (s,y) <- fs]) This "marks" the odd composites in a given range by generating them - just as a person performing the original sieve of Eratosthenes would do, counting one by one the multiples of the relevant primes. These composites are independently generated so some will be generated multiple times. Rearranging the chain of subtractions into a subtraction of merged streams (see below) and using tree-like folding further speeds up the code and improves its asymptotic behavior. The advantage in working with spans explicitly is that this code is easily amendable to using arrays for the composites marking and removal on each finite span; and memory usage can be kept in check by using fixed sized segments. ### Linear merging But segmentation doesn't add anything substantially, and each multiples stream starts at its prime's square anyway. What does the Postponed code do, operationally? For each prime's square passed over, it builds up a nested linear left-deepening structure, (...((xs-a)-b)-...), where xs is the original odds-producing [3,5..] list, so that each odd it produces must go through minus nodes on its way up - and those odd numbers that eventually emerge on top are prime. Thinking a bit about it, wouldn't another, right-deepening structure, (xs-(a+(b+...))), be better? This idea is due to Richard Bird (in the code presented in Melissa O'Neill's article). 
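Before turning to that right-deepening structure, the two unbounded variants just defined can be sanity-checked in GHCi (a hypothetical session, again assuming the definitions compile as intended):

> take 10 (primesPE ())
[2,3,5,7,11,13,17,19,23,29]
> primesSE () !! 999
7919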
Here, xs would stay near the top, and more frequently odds-producing streams of multiples of smaller primes would be above those of the bigger primes, that produce less frequently their candidates which have to pass through more union nodes on their way up. Plus, no explicit synchronization is necessary anymore because the produced multiples of a prime start at its square anyway - just some care has to be taken to avoid a runaway access to the infinitely-defined structure (specifically, if each (+) operation passes along unconditionally its left child's head value before polling the right child's head value, then we are safe). Here's the code, faster yet but still with about same time complexity of $O(n^{1.4})$: {-# OPTIONS_GHC -O2 -fno-cse #-} primesLME () = 2 : ([3,5..] minus join [[p*p,p*p+2*p..] | p <- primes']) where primes' = 3 : ([5,7..] minus join [[p*p,p*p+2*p..] | p <- primes']) join ((x:xs):t) = x : union xs (join t) The double primes feed is introduced here to prevent unneeded memoization and thus prevent memory leak, as per Melissa O'Neill's code, and is dependent on no expression sharing being performed by a compiler. ### Tree merging Moreover, it can be changed into a tree structure. This idea is due to Dave Bayer on haskell-cafe mailing list (though in more complex formulation, its radical simplification due to Will Ness): {-# OPTIONS_GHC -O2 -fno-cse #-} primesTME () = 2 : ([3,5..] minus join [[p*p,p*p+2*p..] | p <- primes']) where primes' = 3 : ([5,7..] minus join [[p*p,p*p+2*p..] | p <- primes']) join ((x:xs):t) = x : union xs (join (pairs t)) pairs ((x:xs):ys:t) = (x : union xs ys) : pairs t It is very fast, running at speeds and empirical complexities comparable with the code from Melissa O'Neill's article (about $O(n^{1.2})$ in number of primes n produced). For esthetic purposes the above can be rewritten as follows, using explicated infinite tree-like folding: primes = 2 : g (fix g) where g xs = 3 : gaps 5 (foldi (\(x:xs) ys -> x:union xs ys) [] [[x*x, x*x+2*x..] | x <- xs]) fix g = xs where xs = g xs gaps k s@(x:xs) | k<x = k:gaps (k+2) s -- equivalent to | True = gaps (k+2) xs -- [k,k+2..]minuss, k<=x ### Tree merging with Wheel Wheel factorization optimization can be further applied, and another tree structure can be used which is better adjusted for the primes multiples production (effecting about 5%-10% at the top of a total 2.5x speedup w.r.t. the above tree merging on odds only - though complexity stays the same): {-# OPTIONS_GHC -O2 -fno-cse #-} primesTMWE () = 2:3:5:7: gaps 11 wheel (join $roll 11 wheel primes') where primes' = 11: gaps 13 (tail wheel) (join$ roll 11 wheel primes') join ((x:xs): ~(ys:zs:t)) = x : union xs (union ys zs) union join (pairs t) pairs ((x:xs):ys:t) = (x : union xs ys) : pairs t gaps k ws@(w:t) cs@(c:u) | k==c = gaps (k+w) t u | True = k : gaps (k+w) t cs roll k ws@(w:t) ps@(p:u) | k==p = scanl (\c d->c+p*d) (p*p) ws : roll (k+w) t u | True = roll (k+w) t ps wheel = 2:4:2:4:6:2:6:4:2:4:6:6:2:6:4:2:6:4:6:8:4:2:4:2: 4:8:6:4:6:2:4:6:2:6:6:4:2:4:6:2:6:4:2:4:2:10:2:10:wheel #### Above Limit Another task is to produce primes above a given number (not having to find out their ordinal numbers). 
{-# OPTIONS_GHC -O2 -fno-cse #-} primesFromTMWE a0 = (if a0 <= 2 then [2] else []) ++ (gaps a wh' $compositesFrom a) where (a,wh') = rollFrom (snap (max 3 a0) 3 2) (h,p':t) = span (< z) primes' -- p < z => p*p<=a z = ceiling$ sqrt $fromIntegral a + 1 -- p'>=z => p'*p'>a compositesFrom a = foldi union' [] (foldi union [] [multsOf p a | p <- h++[p']] : [multsOf p (p*p) | p <- t]) primes' = gaps 11 wheel (foldi union' [] [multsOf p (p*p) | p <- primes'']) primes'' = 11: gaps 13 (tail wheel) (foldi union' [] [multsOf p (p*p) | p <- primes'']) union' (x:xs) ys = x : union xs ys multsOf p from = scanl (\c d->c+p*d) (p*x) wh -- (p*)<$> where -- scanl (+) x wh (x,wh) = rollFrom (snap from p (2*p) div p) gaps k ws@(w:t) cs@(c:u) | k==c = gaps (k+w) t u | True = k : gaps (k+w) t cs snap v origin step = if r==0 then v else v+(step-r) where r = mod (v-origin) step wheelNums = scanl (+) 0 wheel wheel = 2:4:2:4:6:2:6:4:2:4:6:6:2:6:4:2:6:4:6:8:4:2:4:2: 4:8:6:4:6:2:4:6:2:6:6:4:2:4:6:2:6:4:2:4:2:10:2:10:wheel rollFrom n = go wheelNums wheel where m = (n-11) mod 210 go (x:xs) ws@(w:ws') | x < m = go xs ws' | True = (n+x-m, ws) -- (x >= m) This uses the infinite-tree folding foldi with plain union where heads are unordered, and back with union' above that. A certain preprocessing delay makes it worthwhile when producing more than just a few primes. ## Turner's sieve - Trial division David Turner's original 1975 formulation (SASL Language Manual, 1975) replaces non-standard minus in the sieve of Eratosthenes by stock list comprehension with rem filtering, turning it into a kind of trial division algorithm: -- unbounded sieve, premature filters primesT () = 2 : sieve [3,5..] where sieve (p:xs) = p : sieve [x | x<-xs, rem x p /= 0] -- filter ((/=0).(remp)) xs This creates an immense number of superfluous implicit filters in extremely premature fashion. To be admitted as prime, each number will be tested for divisibility here by all its preceding primes potentially, while just those not greater than its square root would suffice. To find e.g. the 1001st prime (7927), 1000 filters are used, when in fact just the first 24 are needed (up to 89's filter only). Operational overhead is enormous here. ### Guarded Filters When we force ourselves away from the Quest for a Mythical One-Liner it really ought to be written at least as bounded and guarded variety (if not abandoned right away in favor of algorithmically superior sieve of Eratosthenes), yet again achieving the miraculous complexity improvement from above quadratic to about $O(n^{1.45})$ empirically (in n primes produced): primesToGT m = 2 : sieve [3,5..m] where sieve (p:xs) | p*p > m = p : xs | True = p : sieve [x | x<-xs, rem x p /= 0] -- filter ((/=0).(remp)) xs ### Postponed Filters or better yet as unbounded, postponed variety: primesPT () = 2 : primes' where primes' = sieve [3,5..] primes' 9 sieve (p:xs) ps@ ~(_:t) q | p < q = p : sieve xs ps q | True = sieve [x | x<-xs, rem x p /= 0] t (head t^2) -- filter ((/=0).(remp)) xs creating here as well the linear nested filters structure at run-time, (...(([3,5..] |> filterBy 3) |> filterBy 5)...) (with |> defined as x |> f = f x), each filter started at its proper moment. ### Optimal trial divison The above is equivalent to the traditional formulation of trial division, isPrime primes n = foldr (\p r -> p*p > n || (rem n p /= 0 && r)) True primes primes = 2 : filter (isPrime primes) [3..] 
except that this one is rechecking for each candidate which primes to use, which will be the same prefix of the primes list being built, for all the candidate numbers in the ever increasing spans between the successive primes squares. ### Segmented Generate and Test This primes prefix's length can be explicitly maintained, achieving a certain further speedup (though not in complexity which stays the same) by turning a list of filters into one filter by an explicit list of primes: primesST () = 2 : primes' where primes' = 3 : sieve 0 5 9 (tail primes') sieve k x q ps = let fs = take k primes' in [n | n <- [x,x+2..q-2], and [rem n f/=0 | f <- fs]] -- filter (\n-> all ((/=0).(rem n)) fs) [x,x+2..q-2] ++ sieve (k+1) (q+2) (head ps^2) (tail ps) This seems to eliminate most recalculations, explicitly filtering composites out from batches of odds between the consecutive squares of primes. All these variants of course being variations of trial division – finding out primes by direct divisibility testing of every odd number by all primes below its square root potentially (instead of just by its factors, which is what direct generation of multiples is doing, essentially) – are thus of principally worse complexities than that of Sieve of Eratosthenes; but one far worse than another yet easily fixable from a wasteful monstrosity to almost acceptable performance (at least for the first few hundred thousand primes, when compiled) just by following the proper definition of the sieve as being bounded, simply with guarded formulation, instead of "heading for the hills" of using brittle implementations of complex data structures with unclear performance guarantees. #### Generate and Test Above Limit The following will start the segmented Turner sieve at the right place, using any primes list it's supplied with (e.g. TMWE etc.), demand computing it up to the square root of any prime it'll produce: primesSTFrom primes m | m>2 = sieve (length h-1) (mdiv2*2-1) (head ps^2) (tail ps) where (h,ps) = span (<= (floor.sqrt $fromIntegral m+1)) primes sieve k x q ps = let fs = take k$ tail primes in [n | n <- [x+2,x+4..q-2], and [rem n f /= 0 | f <- fs]] ++ sieve (k+1) q (head ps^2) (tail ps) This is usually faster than testing candidate numbers for divisibility one by one which has to re-fetch anew the needed prime factors to test by, for each candidate. Faster is the offset sieve of Eratosthenes on odds, and yet faster the above one, w/ wheel optimization. ### Conclusions So it really pays off to analyse the code properly instead of just labeling it "naive". BTW were this divisibility testing somehow turned into an O(1) operation, e.g. by some kind of massive parallelization, the overall complexity would drop to O(n). It's the sequentiality of testing that's the culprit. Though of course the proper multiples-removing S. of E. is a better candidate for parallelization. Did Eratosthenes himself achieve the optimal complexity? It rather seems doubtful, as he probably counted boxes in the table by 1 to go from one number to the next, as in 3,5,7,9,11,13,15, ... for he had no access even to Hindu numerals, using Greek alphabet for writing down numbers instead. Was he performing a genuine sieve of Eratosthenes then? Should faithfulness of an algorithm's implementation be judged by its performance? We'll leave that as an open question. So the initial Turner code is just a one-liner that ought to have been regarded as specification only, in the first place, not a code to be executed as such. 
The reason it was taught that way is probably so that it could provoke this discussion among the students. To regard it as plain executable code is what's been naive all along. ## Euler's Sieve ### Unbounded Euler's sieve With each found prime Euler's sieve removes all its multiples in advance so that at each step the list to process is guaranteed to have no multiples of any of the preceding primes in it (consists only of numbers coprime with all the preceding primes) and thus starts with the next prime: primesEU () = 2 : euler [3,5..] where euler (p:xs) = p : euler (xs minus map (p*) (p:xs)) This code is extremely inefficient, running above $O({n^{2}})$ complexity (and worsening rapidly), and should be regarded a specification only. Its memory usage is very high, with space complexity just below $O({n^{2}})$, in n primes produced. ### Wheeled list representation The situation can be somewhat improved using a different list representation, for generating lists not from a last element and an increment, but rather a last span and an increment, which entails a set of helpful equivalences: {- fromElt (x,i) = x : fromElt (x + i,i) === iterate (+ i) x [n..] === fromElt (n,1) === fromSpan ([n],1) [n,n+2..] === fromElt (n,2) === fromSpan ([n,n+2],4) -} fromSpan (xs,i) = xs ++ fromSpan (map (+ i) xs,i) {- === concat $iterate (map (+ i)) xs fromSpan (p:xt,i) === p : fromSpan (xt ++ [p + i], i) fromSpan (xs,i) minus fromSpan (ys,i) === fromSpan (xs minus ys, i) map (p*) (fromSpan (xs,i)) === fromSpan (map (p*) xs, p*i) fromSpan (xs,i) === forall (p > 0). fromSpan (concat$ take p $iterate (map (+ i)) xs, p*i) -} spanSpecs = iterate eulerStep ([2],1) eulerStep (xs@(p:_), i) = ( (tail . concat . take p . iterate (map (+ i))) xs minus map (p*) xs, p*i ) {- > mapM_ print$ take 4 spanSpecs ([2],1) ([3],2) ([5,7],6) ([7,11,13,17,19,23,29,31],30) -} Generating a list from a span specification is like rolling a wheel as its pattern gets repeated over and over again. For each span specification w@((p:_),_) produced by eulerStep, the numbers in (fromSpan w) up to ${p^2}$ are all primes too, so that eulerPrimesTo m = if m > 1 then go ([2],1) else [] where go w@((p:_), _) | m < p*p = takeWhile (<= m) (fromSpan w) | True = p : go (eulerStep w) This runs at about $O(n^{1.5..1.8})$ complexity, for n primes produced, and also suffers from a severe space leak problem (IOW its memory usage is also very high). ## Using Immutable Arrays ### Generating Segments of Primes The removal of multiples on each segment of odds can be done by actually marking them in an array instead of manipulating the ordered lists, and can be further sped up more than twice by working with odds only, represented as their offsets in segment arrays: primesSA () = 2: 3: sieve (tail primes) 3 [] where sieve (p:ps) x fs = [i*2 + x | (i,e) <- assocs a, e] ++ sieve ps (p*p) fs' where q = (p*p-x)div2 fs' = (p,0) : [(s, rem (y-q) s) | (s,y) <- fs] a = accumArray (\ b c -> False) True (1,q-1) [(i,()) | (s,y) <- fs, i <- [y+s,y+s+s..q]] Apparently, arrays are fast. The above is the fastest code of all presented so far. When run on Ideone.com it is somewhat faster than Tree Merging With Wheel in producing first few million primes, but is very much unacceptable in its memory consumption which grows faster than O(${n}$), quickly getting into tens and hundreds of MBs. ### Calculating Primes Upto a Given Value primesToNA n = 2: [i | i <- [3,5..n], ar ! 
i] where ar = f 5 $accumArray (\ a b -> False) True (3,n) [(i,()) | i <- [9,15..n]] f p a | q > n = a | True = if null x then a' else f (head x) a' where q = p*p a'= a // [(i,False) | i <- [q,q+2*p..n]] x = [i | i <- [p+2,p+4..n], a' ! i] ### Calculating Primes in a Given Range primesFromToA a b = (if a<3 then [2] else []) ++ [i | i <- [o,o+2..b], ar ! i] where o = max (if even a then a+1 else a) 3 r = floor.sqrt.fromInteger$ b+1 ar = accumArray (\a b-> False) True (o,b) [(i,()) | p <- [3,5..r] , let q = p*p s = 2*p (n,x) = quotRem (o - q) s q' = if o <= q then q else q + (n + signum x)*s , i <- [q',q'+s..b] ] Although using odds instead of primes, the array generation is so fast that it is very much feasible and even preferable for quick generation of some short spans of relatively big primes. ## Using Mutable Arrays Using mutable arrays is the fastest but not the most memory efficient way to calculate prime numbers in Haskell. ### Using ST Array This method implements the Sieve of Eratosthenes, similar to how you might do it in C, modified to work on odds only. It is fast, but about linear in memory consumption, allocating one (though apparently bit-packed) array for the whole sequence produced. import Control.Monad import Data.Array.ST import Data.Array.Unboxed primesToNA :: Int -> UArray Int Bool primesToNA n = runSTUArray sieve where sieve = do let m = (n-1)div2 a <- newArray (1,m) True :: ST s (STUArray s Int Bool) let sr = floor . (sqrt::Double->Double) $fromIntegral n+1 forM_ [1..srdiv2]$ \i -> do let ii = 2*i*(i+1) -- == ((2*i+1)^2-1)div2 when si $forM_ [ii,ii+i+i+1..m]$ \j -> writeArray a j False return a primesToN :: Int -> [Int] primesToN n = 2:[i*2+1 | (i,True) <- assocs . primesToNA $n] Its empirical time complexity is improving with n (number of primes produced) from $O(n^{1.25})$ through $O(n^{1.20})$ towards $O(n^{1.16})$. The reference C++ vector-based implementation exhibits this improvement in empirical time complexity too, from $O(n^{1.5})$ gradually towards $O(n^{1.12})$, where tested (which might be interpreted as evidence towards the expected quasilinearithmic $O(n \log(n)\log(\log n))$ time complexity). ### Bitwise prime sieve with Template Haskell Count the number of prime below a given 'n'. Shows fast bitwise arrays, and an example of Template Haskell to defeat your enemies. {-# OPTIONS -O2 -optc-O -XBangPatterns #-} module Primes (nthPrime) where import Control.Monad.ST import Data.Array.ST import Data.Array.Base import System import Control.Monad import Data.Bits nthPrime :: Int -> Int nthPrime n = runST (sieve n) sieve n = do a <- newArray (3,n) True :: ST s (STUArray s Int Bool) let cutoff = truncate (sqrt$ fromIntegral n) + 1 go a n cutoff 3 1 go !a !m cutoff !n !c | n >= m = return c | otherwise = do if e then if n < cutoff then let loop !j | j < m = do when x $unsafeWrite a j False loop (j+n) | otherwise = go a m cutoff (n+2) (c+1) in loop ( if n < 46340 then n * n else n shiftL 1) else go a m cutoff (n+2) (c+1) else go a m cutoff (n+2) c And place in a module: {-# OPTIONS -fth #-} import Primes main = print$( let x = nthPrime 10000000 in [| x |] ) Run as: $ghc --make -o primes Main.hs$ time ./primes 664579 ./primes 0.00s user 0.01s system 228% cpu 0.003 total ## Implicit Heap See Implicit Heap. ## Prime Wheels See Prime Wheels.
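One helper that several of the list-based versions above rely on — foldi, the infinitely deepening tree fold used by primesFromTMWE and the fix-based primes — is not defined in this excerpt. A common definition of it is reproduced here as a sketch (treat the details as an assumption rather than as the wiki's exact code):

foldi :: (a -> a -> a) -> a -> [a] -> a
foldi f z []     = z
foldi f z (x:xs) = f x (foldi f z (pairs f xs))

pairs :: (a -> a -> a) -> [a] -> [a]
pairs f (x:y:t) = f x y : pairs f t
pairs _ t       = t

It works on infinite lists because the combining function (here union or union') produces its first element before forcing its second argument, so the tree unfolds lazily only as far as needed.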
2021-09-28 03:35:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249363303184509, "perplexity": 10641.623898534526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060201.9/warc/CC-MAIN-20210928032425-20210928062425-00564.warc.gz"}
https://ltwork.net/water-moves-from-the-atmosphere-to-earth-s-surface-as-rain--13746962
# Water moves from the atmosphere to Earth's surface as rain, snow, and other forms of precipitation. Please select the best answer from the choices provided

###### Question:

Water moves from the atmosphere to Earth's surface as rain, snow, and other forms of precipitation. Please select the best answer from the choices provided
2022-09-30 00:31:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3219722807407379, "perplexity": 2216.7458281089293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00290.warc.gz"}
https://math.stackexchange.com/questions/2986538/how-to-determine-if-a-set-of-polynomials-spans-a-vector-space
# How to determine if a set of polynomials spans a vector space? I have seen similar problems on the site but when I apply those methods to my problem, my solution does not match up with the supposed correct answer shown in the image. I know that "If a set is LI and has the same dimensions as the vector space, then it is a basis of V". This set is in fact LI because when I do the method of c1v1 + c2v2 = 0, I get that c1 = c2 = 0 which means LI. Also, when I put the vectors in matrix form and find the rank, I get that the rank is two which matches the dimension of the vector space which indicates, to me, that the set spans P2. Can someone clarify the method I should be using to approach these problems with polynomials? • Your approach seems good to me. Does the program match them as incorrect? – Natalio Nov 6 '18 at 0:33 • The dimension of the space is $3.$ Please explain why you think it is $2,$ so we can explain your mistake. – saulspatz Nov 6 '18 at 0:36 • @S. Snake, are your answers the ones in the image? I thought they were. – Natalio Nov 6 '18 at 0:38 • There are two questions there. Which one are you asking about? – amd Nov 6 '18 at 0:43 0 is in the first set, so it cannot be linearly independent, since $$x_10=0$$ does not imply $$x_1=0$$. The rank of the second set is in fact two, but the dimension of the space is 3. Thus it cannot span $$P_2$$. You know that you have a three dimensional space, so you need three linearly independent vectors in your Basis. If you have four vectors or two vectors it is not a Basis. On the other hand if you have three vectors you may turn them into a matrix and check the determinant for linear independence. Rows of your matrix are coefficients of $$\{1,t,t^2\}$$ for example $$3t^2-2t+1$$ turns into $$(1,-2,3)$$
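As a concrete illustration of the rank/determinant check described above, consider the (hypothetical) set $\{1+t,\; t+t^2,\; 1+t^2\}$ in $P_2$. Writing each polynomial as its coefficient vector with respect to $\{1,t,t^2\}$ and stacking the vectors as rows gives

$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}, \qquad \det = 1(1\cdot 1 - 1\cdot 0) - 1(0\cdot 1 - 1\cdot 1) + 0 = 2 \neq 0.$$

The determinant is nonzero, so the three vectors are linearly independent; since $\dim P_2 = 3$, they form a basis and therefore span $P_2$.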
2019-11-21 21:09:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7921798229217529, "perplexity": 165.46382651882692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00037.warc.gz"}
https://plainmath.net/pre-algebra/80270-how-do-you-solve-and-graph-mstyle-di
Ellen Chang 2022-07-02

How do you solve and graph $-x-3<-5$?

zlepljalz2 Expert

Step 1
To solve, first treat the inequality like an equation: $-x-3<-5$. Add 3 to each side to isolate the variable, which gives $-x<-2$. To get rid of the negative, divide both sides by -1, because x is being multiplied by -1. When dividing an inequality by a negative number, you have to flip the sign: $x>2$.

Now use this to graph. You are trying to locate all points where the x value is greater than 2. First, find the vertical line $x=2$. Since the inequality is strictly greater than (not greater than or equal to) 2, $x=2$ itself is not included, so this line will be dotted. (If the inequality were $x\ge 2$, the line would be solid.) You are looking for all points where x is greater than 2, so shade the whole portion of the graph to the right of the dotted line. These are all the points whose x-values are greater than 2.
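A quick numerical check of the solution (the sample values are chosen here only for illustration):

$$x = 3:\quad -3 - 3 = -6 < -5 \quad\checkmark \qquad\qquad x = 0:\quad -0 - 3 = -3 \not< -5$$

so values with $x > 2$ satisfy the original inequality and values at or below 2 do not, consistent with the dotted boundary at $x = 2$ and the shading to its right.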
2023-02-06 17:05:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 33, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5505303144454956, "perplexity": 233.32480575230565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00369.warc.gz"}
https://zenodo.org/record/3876154
Dataset Open Access # CauseNet: Towards a Causality Graph Extracted from the Web Heindorf, Stefan; Scholten, Yan; Wachsmuth, Henning; Ngonga Ngomo, Axel-Cyrille; Potthast, Martin Causal knowledge is seen as one of the key ingredients to advance artificial intelligence. Yet, few knowledge bases comprise causal knowledge to date, possibly due to significant efforts required for validation. Notwithstanding this challenge, we compile CauseNet, a large-scale knowledge base of claimed causal relations between causal concepts. By extraction from different semi- and unstructured web sources, we collect more than 11 million causal relations with an estimated extraction precision of 83% and construct the first large-scale and open-domain causality graph. We analyze the graph to gain insights about causal beliefs expressed on the web and we demonstrate its benefits in basic causal question answering. Future work may use the graph for causal reasoning, computational argumentation, multi-hop question answering, and more. When using the data, please make sure to refer to it as follows: @inproceedings{heindorf2020causenet, author = {Stefan Heindorf and Yan Scholten and Henning Wachsmuth and Axel-Cyrille Ngonga Ngomo and Martin Potthast}, title = {CauseNet: Towards a Causality Graph Extracted from the Web}, booktitle = {{CIKM}}, pages = {3023--3030}, publisher = {{ACM}}, year = {2020} } Files (2.0 GB) Name Size causenet-full.jsonl.bz2 md5:78273d177c5096f89d2367a876b64645 1.8 GB causenet-precision.jsonl.bz2 md5:8bf12257e71713a63403bc8fe8bf71bf 137.9 MB causenet-sample.json md5:662cefd755f046751d1a345fe6abbdf7 55.2 kB • Stefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, and Martin Potthast. CauseNet: Towards a Causality Graph Extracted from the Web. In CIKM 2020, pages 2023-3030. ACM. 57 24 views
2021-08-03 15:23:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3409169018268585, "perplexity": 14853.733321318376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154459.22/warc/CC-MAIN-20210803124251-20210803154251-00016.warc.gz"}
https://2022.help.altair.com/2022/hwsolvers/os/topics/solvers/os/dobjref_bulk_r.htm
# DOBJREF Bulk Data Entry Defines a response and its reference values for a minmax (maxmin) optimization problem. ## Format (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) DOBJREF DOID RID SID NEGREF / LID POSREF / UID LOWFQ HIGHFQ ## Example 1 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) DOBJREF 22 3 ALL -1.0 1.0 DOBJREF 22 5 ALL -1.0 1.0 Table 1. Associated Cards (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) DRESP1 3 TOP DISP     3   488 DRESP1 5 BOTTOM DISP     3   601 ## Example 2 (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) DOBJREF 23 14 ALL -1.0 1.0 Table 2. Associated Cards (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) DRESP1 3 TOP DISP     3   488 DRESP1 5 BOTTOM DISP     3   601 ## Definitions Field Contents SI Unit Example DOID Design objective identification number. (Integer > 0) RID DRESP1 or DRESP2 identification number. (Integer > 0) SID Subcase identification number. ALL (Default) If it applies to all subcases blank (Integer > 0) NEGREF/ LID NEGREF Default = -1 (Real < 0.0) Reference value for a negative response (should always be a negative real number or blank). 2 3 5 LID No default <Integer> Table identification number of a TABLEDi entry that specifies the negative reference as a function of loading frequency. 2 3 5 POSREF/ UID POSREF Default = 1.0 (Real > 0.0) Reference value for a positive response (should always be a positive real number or blank). 2 3 5 UID No default <Integer> Table identification number of a TABLEDi entry that specifies the positive reference as a function of loading frequency. 2 3 5 Default = 0.0 (Real ≥ 0.0) Default = 1.0E+20 (Real ≥ LOWFQ) 1. The same DOID can be used for multiple DOBJREF entries. If only one DOID is used, only one MINMAX=DOID entry is needed in the Subcase Information section. 2. The use of reference values allows users to set up general minmax problems involving different responses with different magnitudes. For these problems, the objective can be defined as:(1) $\mathit{Minimize} \text{max} \left({W}_{1}\left(x\right)/{r}_{1}, {W}_{2}\left(\text{x}\right)/{\text{r}}_{2},\dots {W}_{k}\left(x\right)/{r}_{k}\right)$ Or, alternatively:(2) $\mathit{Maximize} \text{max} \left({W}_{1}\left(x\right)/{r}_{1}, {W}_{2}\left(x\right)/{\text{r}}_{2},\dots {W}_{k}\left(x\right)/{r}_{k}\right)$ Where, ${W}_{k}$ Response values ${r}_{k}$ Are corresponding reference values, which can take different values depending on whether the response is positive or negative. 3. Typically, the target value or constraint value of a response can be used as its reference value. So, instead of the traditional optimization problem where there is a single objective and multiple constraints, the problem may be formulated as a minmax (maxmin) optimization, where all the responses which were previously constrained are defined as objectives and their bounds are used as reference values. This works toward pushing the maximum ratio of response versus bound value as low as possible, thus increasing the safety of the structure. 4. LOWFQ and HIGHFQ apply only to response types related to a frequency response subcase (DRESPi, RTYPE = FRDISP, FRVELO, FRACCL, FRSTRS, FRSTRN, FRFORC, FRPRES and FRERP). The reference values NEGREF and POSREF are applied only if the loading frequency falls between LOWFQ and HIGHFQ. If ATTB of DRESP1 specifies a frequency value, LOWFQ and HIGHFQ are ignored. 5. LID and UID identify a loading frequency dependent tabular input using TABLEDi. They are applied analogous to LOWFQ, HIGHFQ. 4 6. 
The recommended setup to define Minmax or Maxmin models which reference several responses is: • Create multiple DOBJREF entries with the same DOID • Each DOBJREF entry references one response • MINMAX or MAXMIN entry should reference the single ID corresponding to all DOBJREF entries. 7. This card is represented as a design objective reference in HyperMesh.
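As a numerical illustration of how the reference values normalize responses of different sign and magnitude in the minmax formulation above (the response and reference values below are invented, not taken from the examples): suppose $W_1 = 120$ with POSREF $r_1 = 100$, and $W_2 = -30$ with NEGREF $r_2 = -50$. Then

$$\max\left(\frac{W_1}{r_1},\ \frac{W_2}{r_2}\right) = \max\left(\frac{120}{100},\ \frac{-30}{-50}\right) = \max(1.2,\ 0.6) = 1.2,$$

so a MINMAX objective concentrates on the first response — the one that currently exceeds its reference by the largest ratio — and drives that worst-case ratio down.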
2023-03-30 15:10:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7989147305488586, "perplexity": 6418.880160391727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949331.26/warc/CC-MAIN-20230330132508-20230330162508-00146.warc.gz"}
https://www.jmst.org/article/2020/1005-0302/1005-0302-49-0-7.shtml
Journal of Materials Science & Technology, 2020, 49(0): 7-14 doi: 10.1016/j.jmst.2020.02.023

Research Article

## Evaluation on the interface characteristics, thermal conductivity, and annealing effect of a hot-forged Cu-Ti/diamond composite

Lei Lei1, Yu Su1, Leandro Bolzoni, Fei Yang,*

Waikato Centre for Advanced Materials and Manufacturing, School of Engineering, University of Waikato, Hamilton, 3240, New Zealand

Corresponding authors: * E-mail address: fei.yang@waikato.ac.nz (F. Yang).

First author contact: 1 Equal contribution.

Received: 2019-11-12   Revised: 2020-01-10   Accepted: 2020-01-12   Online: 2020-07-15

Abstract

A Cu-1.5 wt.%Ti/Diamond (55 vol.%) composite was fabricated by hot forging of a powder mixture of copper, titanium and diamond powders at 1050 °C. A nano-thick TiC interfacial layer was formed between the diamond particles and the copper matrix during forging, with an orientation relationship of (111)TiC//(002)Cu & [1$\bar{1}$0]TiC//[1$\bar{1}$0]Cu with the copper matrix. HRTEM analysis suggests that TiC is semi-coherently bonded with the copper matrix, which helps reduce phonon scattering at the TiC/Cu interface and facilitates heat transfer, so that the hot-forged copper/diamond composite (referred to as Cu-Ti/Dia-0) has a thermal conductivity of 410 W/mK, about 74 % of the theoretical thermal conductivity of the hot-forged copper/diamond composite (552 W/mK). However, the formation of a thin amorphous carbon layer in the diamond particles (next to the interfacial TiC layer) and the deformed structure in the copper matrix have an adverse effect on the thermal conductivity of the Cu-Ti/Dia-0 composite. Annealing at 800 °C eliminates the discrepancy in TiC interface morphology between the diamond-{100} and -{111} facets of the Cu-Ti/Dia-0 composite, but causes TiC particle coarsening and agglomeration in the Cu-Ti/Dia-2 composite and interfacial layer cracking and spallation in the Cu-Ti/Dia-1 composite. In addition, a large amount of graphite was formed by titanium-induced diamond graphitization in the Cu-Ti/Dia-2 composite. All these factors deteriorate the heat transfer behavior of the annealed Cu-Ti/Dia composites. Appropriate heat treatments need to be investigated further to improve the thermal conductivity of the hot-forged Cu-Ti/Dia composite by eliminating the deformed structure in the copper matrix with limited or no impact on the formed TiC interfacial layer.

Keywords: Copper/diamond composite ; Hot forging ; Interface characteristics ; Thermal conductivity ; Heat treatment

Lei Lei, Yu Su, Leandro Bolzoni, Fei Yang. Evaluation on the interface characteristics, thermal conductivity, and annealing effect of a hot-forged Cu-Ti/diamond composite. Journal of Materials Science & Technology[J], 2020, 49(0): 7-14 doi:10.1016/j.jmst.2020.02.023

## 1. Introduction

Heat dissipation is a crucial issue for high-power electronic devices because their power output and level of circuit integration are continually increasing. Considerable efforts have been made to develop advanced heat-sink materials that are used as chip substrates in the highly integrated circuits of high-power electronic devices [[1], [2], [3], [4], [5]].
Copper/diamond composites are regarded as promising materials, which have a great potential to achieve high thermal conductivity and tailored coefficient of thermal expansion, since diamond has a high thermal conductivity (1500~2000 W/mK) [6,7] and a low coefficient of thermal expansion [8], and copper also has reasonable high thermal conductivity. However, the poor wettability of copper and diamond make it difficult to form an effective bond between the two materials, so that the synthesised diamond/copper composites usually have low thermal conductivity [9,10]. Furthermore, the acoustic impedance mismatch between the diamond and the copper causes low interfacial thermal conductance [11]. Two methods are usually used to help form an interfacial carbide layer between the diamond and the copper to improve the composites’ thermal conductivity: metal matrix alloying (CuX, X = Ti, B, Cr, Zr) [[12], [13], [14], [15], [16], [17], [18]] and diamond surface metallisation by carbide forming elements (W, Cr, B, Ti, Zr, Mo) [[19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29]]. The metal carbide interlayer with appropriate acoustic impedance (ZCu<Zcarbide<ZDiamond) acts as a bridge to reduce the acoustic mismatch and increase the interfacial bonding strength between the copper and the diamond [7,11], leading to improving interfacial thermal conductance. Theoretically, the interlayer features such as crystallinity and thickness could affect heat transfer across the interlayer structure. It reported that a disordered interface layer disturbs the propagation of vibrational waves and increases phonon scattering, which in turn reduces thermal conductivity [30]. The interfacial thermal conductance also decreases with the increase of interlayer thickness due to its increased thermal resistance. Furthermore, the content of carbide elements added may affect thermal conductivity by changing the interfacial layer thickness and solubility in the copper matrix [12,13]. However, the effect of interfacial layer’s microstructure on thermal conductivity is rarely studied, which is crucial to provide guidance in designing the interface microstructure of copper/diamond composites to achieve required thermal conductivity. Extensive research has been carried out to synthesise copper/diamond composite by infiltration [[31], [32], [33], [34]], spark plasma sintering (SPS) [[35], [36], [37]] and high pressure/high-temperature process [38]. As an alternative cost-effective method, hot-forging has been proved as a feasible approach to rapidly fabricate copper/diamond composites from powders [39,40]. In this work, we prepare a copper-titanium/diamond composite by hot forging of elemental powder mixture and investigate the detailed interface microstructure, effect of annealing heat treatments, and resultant thermal conductivity. ## 2. Experimental procedure ### 2.1. Materials preparation The raw materials used were copper powder (99.7 % purity, <45 μm), MBD8 diamond (Octagonal shape, 70-80 μm, supplied by Henan Huanghe Whirlwind Co., China), and hydride-dehydride titanium powder (99.6 % purity, <75 μm). The copper/diamond composite with a nominal composition of Cu-1.5 wt.%Ti/55 vol.%Diamond (Cu-Ti/Dia) was fabricated by hot forging of cold-pressed powder preform at 1050 °C. The powder mixture was blended in a V-type blender at 60 rpm for 90 min. After that, the powder mixture was mechanically pressed into a compact of 40 mm diameter and 33 mm height. 
The compact was then loaded in a steel can, heated up to 1050 ℃ and forged into a pancake in an argon atmosphere, and the forged pancake was slowly cooled down to room temperature. The detailed processing procedures could be found in Refs. [39,40]. For comparison, the pure copper and Cu-1.5 wt.%Ti (referred to as Cu-Ti) alloy billets were prepared by hot-pressing of Cu or Cu and Ti powder mixture at 1050 °C for 1.5 min (similar to hot forging). The cylindrical specimens, with a diameter of 12.7 mm, were cut from the hot-forged Cu-Ti/Dia composite billet by electrical discharge machining, subsequently encapsulated in quartz tubes with a vacuum of 1 × 10-5 Pa, and then heat treated at 800 °C for desired time. The heat treatment parameters for the encapsulated specimens are listed in Table 1. We named the forged Cu-Ti/Dia composite sample as Cu-Ti/Dia-0, and the heat treated samples as Cu-Ti/Dia-1 and Cu-Ti/Dia-2, respectively. Table 1   The heat treatment parameters for the encapsulated specimens. SampleHeating rate (℃/min)Temperature (℃)Holding time (h)Cooling rate (℃/min) Cu-Ti/Dia-13080015 Cu-Ti/Dia-2580025 ### 2.2. Materials characterisation The phase constitutions of the hot-forged and heat treated Cu-Ti/Dia composites were identified by X-ray diffraction (XRD, Cu Ka radiation). Field emission scanning electron microscope (SEM, HITACHI, S4700), equipped with EDS, was used to observe the composite matrix’s microstructure, diamond morphology and tensile fracture surface (tensile tests were conducted using an Instron (33R4204) machine at a strain rate of 10-3 s-1 at room temperature, and the tensile specimens have a dog-bone shape with gauge dimension of 2 mm × 2 mm × 20 mm). Two methods were used to prepare SEM samples for the observation of matrix microstructure and diamond morphology: (1) the bulk Cu-Ti/Dia composites were polished first and then etched with 30 % nitric acid for 4 min; and (2) the diamond particles were completely extracted from the composites using nitric acid. The detailed interface characteristics, including phase constitution, composition, and microstructures, were examined by TEM (FEI Tecnai G2 F20, USA) that equipped with a bruker detector for energy dispersive x-ray spectroscopy (EDX) analysis. A dual beam focused ion beam workstation system (FIB, FEI Helios NanoLabTM 600i, USA) was used to prepare TEM specimens from the hot-forged composite. ### 2.3. Thermal conductivity measurement The thermal conductivity (λ) was calculated using the equation λ = α ρ cp, where α is the thermal diffusivity, ρ is sample density, and cp is specific heat capacity. The thermal diffusivity was measured by a laser flash technique using an LFA 467 instrument (Netzsch, Germany) at room temperature, the rule of mixture (ROM) was used to calculate the specimen’s specific heat capacity based on the mass fraction of each component, and ρ was measured by the Archimedes principle. For measuring thermal diffusivity, the cylindrical samples used were cut from the Cu-Ti/Dia-0, Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites and had a dimension of Φ12.7 mm × 3 mm. All samples were ground with abrasive papers coded as 320# and 1000# and then cleaned ultrasonically in ethanol for 1 min. ## 3. Results and discussion ### 3.1. Phase constitution Fig. 1 shows the XRD patterns of Cu-Ti, Cu-Ti/Dia-0, Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites. Only the diffraction peak of copper appeared in the Cu-Ti sample, indicating that the titanium powders dissolved into the copper matrix to form Cu (Ti) solid solution. 
Peaks for diamond, TiC and Cu are detected in the other three composite samples, and the intensity of the TiC peak is higher in the Cu-Ti/Dia-2 composite than in both the Cu-Ti/Dia-0 and Cu-Ti/Dia-1 composites. This indicates that titanium reacted with diamond to form TiC during hot forging, that the phase constitution of the titanium carbide remains unchanged during annealing at 800 °C, and that the amount of TiC increases with prolonged annealing time. We can speculate that the Ti did not react completely with the diamond during hot forging at 1050 °C for 1.5 min, and that a small amount of titanium remains in the Cu-Ti/Dia-0 composite, either in a quantity too small to be detected by XRD or dissolved as a Cu(Ti) solid solution. The remaining titanium or Cu(Ti) solid solution reacts further with diamond during annealing at 800 °C for 2 h to form more TiC, so that more TiC is detected in the Cu-Ti/Dia-2 composite, which shows a higher TiC peak intensity than the other two composites. A graphite peak is also identified in the Cu-Ti/Dia-2 composite but not in the other two composites, meaning that diamond is transformed into graphite (in a quantity large enough to be detected by XRD) when the composite is held at 800 °C for 2 h. It has been reported that transition elements can promote the transformation of sp³ bonds into sp² bonds on the diamond surface and thereby induce graphitisation of the diamond [13]. This suggests that titanium very probably remains in the Cu-Ti/Dia-0 composite, and that the diamond particles are induced to graphitise by this residual titanium when the composite is annealed at 800 °C for 2 h.

### Fig. 1.

Fig. 1.   XRD patterns of (a) Cu-Ti, (b) Cu-Ti/Dia-0, (c) Cu-Ti/Dia-1, and (d) Cu-Ti/Dia-2 composites.

### 3.2. Microstructure

Fig. 2 shows the SEM microstructures of the Cu-Ti/Dia-0, Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites. The diamond particles are uniformly distributed in the copper matrix (Fig. 2a), and no visible gaps are observed between the diamond and the copper matrix; only a few diamond particles have detached from the copper matrix, leaving pits that show the original diamond shape (indicated by red arrows), and this detachment is likely caused by mechanical polishing during the preparation of the SEM specimens. This suggests that the interfacial bonding between the diamond and the copper matrix is strong in the hot-forged composite (Cu-Ti/Dia-0). After annealing at 800 °C, more pits formed by the detachment of diamond from the copper matrix can be seen, as shown in Fig. 2d and g, and large cracks are visible between the diamond and the copper in the Cu-Ti/Dia-1 composite (Fig. 2e and f), which are not observed in the Cu-Ti/Dia-0 composite (Fig. 2b and c). This implies that the interfacial bonding between the diamond and the copper in the Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites becomes weaker than that in the Cu-Ti/Dia-0 composite. High-magnification SEM images suggest that, for all of the copper/diamond composites, the interface formed on the diamond {100} facets is thicker than that on the diamond {111} facets, as shown in Fig. 2b, c, e, f, h and i.

### Fig. 2.

Fig. 2.   SEM microstructures of Cu-Ti/Dia-0 (a-c), Cu-Ti/Dia-1 (d-f) and Cu-Ti/Dia-2 (g-i) composites.
To further investigate the microstructure of the interface formed between the diamond and the copper, the morphologies of the diamond particles extracted from the Cu-Ti/Dia-0, Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites are presented in Fig. 3. It can be seen that an interface layer is formed on both the diamond {100} and {111} facets for all three composites (Fig. 3a, d and g), and statistical analysis suggests that the coverage of the diamond particles by the interface layer is over 95 %. We have previously reported that the TiC interfacial layer forms and grows more readily on the diamond {100} facets than on the {111} facets in hot-forged copper-Ti/diamond composites [40], because the C atoms are bonded by two C-C bonds on the {100} facets and by three C-C bonds on the {111} facets, so that the solubility of C atoms from the {100} facets is higher than that from the {111} facets [38]. This leads to a dense TiC layer forming on the diamond {100} facets (Fig. 3b) and a porous, network-like TiC layer forming on the diamond {111} facets (Fig. 3c). After annealing, the interface microstructures on both the {100} and {111} facets of the 800 °C heat-treated composites are distinct from those of the as-hot-forged Cu-Ti/Dia-0 composite, as shown in Fig. 3e, f, h and i. Besides the coarsening of the TiC particles, several cracks are visible on the diamond {100} facets of the Cu-Ti/Dia-1 composite (Fig. 3e), whereas no cracks are observed on the diamond {100} facets of the Cu-Ti/Dia-2 composite (Fig. 3h). The crack-free interface formed in the Cu-Ti/Dia-0 composite is mainly attributed to (1) the constrained deformation during steel-can forging and (2) the controlled cooling afterwards. However, since the coefficients of thermal expansion (CTE) of diamond, copper and titanium carbide are very different (2.3 × 10⁻⁶ /K for diamond [16], 16.5 × 10⁻⁶ /K for pure copper [16], and about 9.5 × 10⁻⁶ /K for titanium carbide [41]), cracks form easily in the composite during fast heating and cooling. Furthermore, the TiC is formed by the reaction of the added titanium with the diamond particles, so the bond between the TiC and the diamond is relatively stronger than that between the TiC and the copper matrix. As a result, cracks are visible between the TiC interface and the copper matrix in the Cu-Ti/Dia-1 composite. These cracks readily cause spallation of the interfacial layer from the diamond surface, and this may be the primary reason for the weak interfacial bonding between the diamond and the copper and for the formation of gaps in the Cu-Ti/Dia-1 composite. A dense TiC layer can be seen in both the Cu-Ti/Dia-1 (Fig. 3f) and Cu-Ti/Dia-2 (Fig. 3i) composites, and TiC particle clusters (marked by yellow arrows) are visible in Fig. 3h and i, meaning that some TiC particles grow significantly and agglomerate during annealing, and that this agglomeration becomes more severe with increasing annealing time. This is evidenced by the larger number of TiC particle clusters observed on the diamond {111} facets in the Cu-Ti/Dia-2 composite than in the Cu-Ti/Dia-1 composite.

### Fig. 3.

Fig. 3.   Surface morphology of the extracted diamond particles from the copper/diamond composites: (a-c) Cu-Ti/Dia-0, (d-f) Cu-Ti/Dia-1, (g-i) Cu-Ti/Dia-2.

Fig. 4 shows the TEM microstructure and EDS analysis of the interface between a diamond {100} facet and the copper matrix in the Cu-Ti/Dia-0 composite. Three distinct regions are visible in Fig.
4a (as indicated by the red dashed line): a bright region (A), a layered-structure region (Bi and Bii), and a dark region (C). To determine the phase constitutions, EDS point analysis was performed at the positions indicated in Fig. 4a. The results (Fig. 4b) show that points 1, 3 and 4 consist of 100 wt.% C, 100 wt.% Cu and 100 wt.% Cu, respectively, while point 2 contains 35.87 wt.% C, 60.85 wt.% Ti and 3.28 wt.% Cu. The EDS line scan across the layered-structure region (the line position is indicated in Fig. 4a), shown in Fig. 4c, clearly reveals that a layer 80-100 nm thick within the layered-structure region is rich in both Ti and C. Combined with the XRD results, this suggests that regions A and C consist primarily of diamond and copper, respectively, that the Bi layer (in the layered-structure region) is the newly formed TiC interfacial layer (about 80-100 nm thick) between the diamond and the copper, and that the Bii layer is part of the copper matrix but has a distinct boundary with the copper matrix in region C, which may be caused by a different deformation distribution between the Bii copper layer and the copper matrix in region C during hot forging (this is discussed in a later section). The interface between the diamond and the TiC layer is continuous and straight, while the interface between the TiC layer and the Bii copper layer has a serrated shape. This indicates that the TiC nucleates heterogeneously on the diamond surface and then grows preferentially from the diamond surface into the copper matrix along a certain crystal orientation. Furthermore, no voids or microcracks are visible between the interfacial TiC layer and the diamond or the copper, suggesting that the diamond and the copper matrix are well bridged by the newly formed TiC layer in the Cu-Ti/Dia-0 composite.

### Fig. 4.

Fig. 4.   TEM analysis of Cu-Ti/Dia-0 composite: (a) interface, (b) point EDS, and (c) line-scanning EDS.

More detailed interface characteristics obtained by HRTEM are presented in Fig. 5. There are two interfaces between the TiC interfacial layer and its adjacent layers of diamond and Bii copper (Fig. 5a), marked as interfaces 1 and 2 in Fig. 5a, respectively. The interface between the Bii copper and the primary copper matrix (region C in Fig. 5a) is marked as interface 3. FFT and IFFT analysis, conducted at square b in Fig. 5a, shows that a thin layer of amorphous carbon has formed (Fig. 5b). This implies that diamond is transformed to amorphous carbon, most likely induced by the presence of titanium, since carbide-forming elements can act as a catalyst for this transformation [13]. This further suggests that titanium remains in the hot-forged Cu-Ti/Dia-0 composite and helps form the large amount of graphite observed in the Cu-Ti/Dia-2 composite after long-term annealing. Next to the amorphous carbon, a TiC layer is identified (Fig. 5b and d), as the measured interplanar spacing of 0.246 nm in Fig. 5d matches the spacing of the TiC (111) plane. Therefore, it can be speculated that the interfacial TiC layer between the diamond and the copper is formed by the reaction of diamond/amorphous carbon with Ti during hot forging. For interface 2, the high-resolution TEM image acquired at square c in Fig. 5a is shown in Fig. 5c, and the related FFT and IFFT analysis results are presented in Fig. 5e and f.
The Cu (111), ($\bar{1}$11) and (002) planes and the TiC (111) plane are indexed. The FFT diffraction spots clearly show that the (111)TiC and (002)Cu planes approximately overlap along the [1$\bar{1}$0]TiC or [1$\bar{1}$0]Cu zone axis, giving the orientation relationship (111)TiC//(002)Cu and [1$\bar{1}$0]TiC//[1$\bar{1}$0]Cu. Based on the measurement of the interplanar spacings from the IFFT image (Fig. 5e), the lattice mismatch (ε) between Cu and TiC is determined to be 14.6 % using ε = 2(α_TiC - α_Cu)/(α_TiC + α_Cu), where α refers to the respective lattice parameter of each phase [42]. According to the Bramfitt lattice matching theory [43], a semi-coherent interface is formed, suggesting that the TiC/Cu interface is bonded semi-coherently in the Cu-Ti/Dia-0 composite. It is well accepted that a semi-coherent interface has lower interface energy than an incoherent interface or a diffusion/mechanical interface [44]. Thus, a strong interfacial bond between the interfacial TiC layer and the copper matrix is formed in the Cu-Ti/Dia-0 composite. Part of the Cu lattice bonded to the TiC is defective, which results in a periodic loss of lattice registry and the formation of a distortion region at the TiC/Cu interface, as shown in Fig. 5e. Such an interface distortion zone usually contains a high density of dislocations that accommodate the internal stress caused by the hot forging and by the formation of TiC [45], which in turn helps to improve the adhesion of the interface. Therefore, both the semi-coherent bonding and the interface distortion zone contribute to the enhanced interfacial bonding of the TiC/Cu interface. The FFT/IFFT results obtained at square g in Fig. 5a confirm the presence of the (111), ($\bar{1}$11) and (002) planes of Cu (Fig. 5g), further suggesting that interface 3 arises from a difference in copper texture rather than from a phase difference. This is because the strong semi-coherent copper-titanium carbide interface restrains the deformation of the copper near the titanium carbide during hot forging, whereas the deformation of the copper far away from the interface is relatively large, resulting in the formation of a deformation interface within the copper matrix.

### Fig. 5.

Fig. 5.   Interface characteristics of copper-Ti/diamond. (a) Representative TEM image; (b) and (c) HRTEM images recorded at the marked b and c regions in (a); (d), (e) and (f) HRTEM images recorded at the marked d, e, f regions in (b) and (c); and (g) HRTEM images recorded at the marked g region in (a).

### 3.3. Thermal conductivity

The thermal conductivity (k) and thermal diffusivity (α) of the Cu-Ti/Dia-0, Cu-Ti/Dia-1 and Cu-Ti/Dia-2 composites, the hot-pressed pure Cu and the Cu-Ti (Cu-1.5 wt.%Ti) alloy are presented in Fig. 6. The fabricated pure Cu billet (98 % of the theoretical density) has a lower thermal conductivity (224 W/mK) than bulk pure copper (385-400 W/mK), mainly because of its deformed structure (such as a high dislocation density and a large number of sub-grains) [40]. With the addition of 1.5 wt.% titanium to the copper matrix, the thermal conductivity of the Cu-Ti alloy is significantly decreased, to 64 W/mK, which is attributed to the scattering of phonons by the dissolved Ti atoms in addition to the influence of the deformed structure [13,46]. The measured thermal conductivity of the Cu-Ti/Dia composite is 410 W/mK, which is almost 7 times that of the Cu-Ti alloy and nearly twice that of the hot-pressed pure copper billet.
Annealing at 800 °C has an adverse effect on the thermal conductivity of the Cu-Ti/Dia-0 composite, reducing it to 250 W/mK for the Cu-Ti/Dia-1 composite and 193 W/mK for the Cu-Ti/Dia-2 composite.

### Fig. 6.

Fig. 6.   (a) Thermal conductivity and thermal diffusivity of hot-pressed Cu and Cu-Ti alloy, and Cu-Ti/Dia-0, Cu-Ti/Dia-1, and Cu-Ti/Dia-2 composites, (b) thermal conductivity comparison between current research and published papers.

The differential effective medium (DEM) model has been widely used to predict the theoretical thermal conductivity of copper/diamond composites; it takes both the interfacial thermal conductance and the diamond particle size into consideration [7]:

$\left( 1-V_{\mathrm{d}} \right)\left( k_{\mathrm{c}}/k_{\mathrm{m}} \right)^{1/3}=\left( k_{\mathrm{d}}^{\mathrm{eff}}-k_{\mathrm{c}} \right)/\left( k_{\mathrm{d}}^{\mathrm{eff}}-k_{\mathrm{m}} \right), \quad \text{with } k_{\mathrm{d}}^{\mathrm{eff}}=k_{\mathrm{d}}/\left( 1+k_{\mathrm{d}}/(R\,G_{\mathrm{c}}) \right)$   (1)

where $k_c$, $k_d$ and $k_m$ are the thermal conductivities of the composite, the diamond reinforcement and the matrix, respectively, $k_d^{\mathrm{eff}}$ is the effective thermal conductivity of the diamond reinforcement, $R$ is the diamond radius, $V_d$ is the volume fraction of diamond reinforcement, and $G_c$ is the interfacial thermal conductance. To calculate the theoretical thermal conductivity of the Cu-Ti/Dia-0 composite using the DEM model, the thermal conductivities of diamond and of the Cu matrix are taken as 1500 and 224 W/mK, respectively, the average diamond radius as 38 μm, and the diamond volume fraction as 55 %. To simplify the calculation, the thinner amorphous carbon layer is neglected and the interfacial layer between the diamond and the copper is considered to be TiC. The interfacial thermal conductance ($G_c$) can then be expressed as follows [7,11]:

$1/G_{\mathrm{c}}=1/G_{\mathrm{Cu/TiC}}+d/k_{\mathrm{TiC}}+1/G_{\mathrm{TiC/diamond}}$   (2)

where $k_{\mathrm{TiC}}$ is the thermal conductivity of TiC, $d$ is the interface thickness, and $G_{\mathrm{Cu/TiC}}$ and $G_{\mathrm{TiC/diamond}}$ are the thermal conductances of the Cu/TiC and TiC/diamond interfaces, respectively, which can be determined by the acoustic mismatch model (AMM) [7]:

$G = 0.25\,C_{A}\,\nu_{A}\,q_{AB}\,\alpha_{AB}$   (3)

where $C$ is the heat capacity per unit volume, $\nu$ is the phonon velocity, $q$ is the fraction of phonons incident within a critical angle ($\theta_c$) at the interface, and $\alpha$ is the transmission coefficient of the phonons incident within the critical angle. The subscripts "A" and "B" denote the incident and outgoing sides, respectively. Following Eq. (3), the calculated $G_{\mathrm{Cu/TiC}}$ and $G_{\mathrm{TiC/diamond}}$ are 2.01 × 10⁸ W/m²K and 5.8 × 10⁸ W/m²K. Taking d = 80 nm (from the current work) and $k_{\mathrm{TiC}}$ = 21 W/mK [7], the calculated theoretical thermal conductivity of Cu-Ti/Dia-0 is 552 W/mK, which means that the thermal conductivity of the hot-forged Cu-Ti/Dia-0 composite is about 74 % of its theoretical value. The thermal conductivity of the Cu-Ti/Dia-0 composite is comparable to, or even better than, that of reported copper/diamond composites with a diamond particle size of about 75 μm [37,47-53] (as shown in Fig. 6b). This illustrates that effective bonding is established between the diamond and the copper in the Cu-Ti/Dia-0 composite by the addition of 1.5 wt.% titanium, which significantly improves the composite's thermal conductivity.
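As a sanity check on the numbers quoted above, Eqs. (1) and (2) can be evaluated numerically. The short script below is only an illustrative sketch, not part of the original study: it takes the input values stated in this section (including the two AMM conductances as quoted, rather than re-deriving them from Eq. (3)) and solves Eq. (1) for the composite conductivity.

```python
# Illustrative sketch of the DEM/AMM estimate described above (not the authors' code).
# All input values are those quoted in the text.
from scipy.optimize import brentq

k_d, k_m = 1500.0, 224.0              # W/mK: diamond and hot-pressed Cu matrix
V_d = 0.55                            # diamond volume fraction
R = 38e-6                             # m, average diamond radius
d = 80e-9                             # m, TiC interlayer thickness
k_TiC = 21.0                          # W/mK
G_Cu_TiC, G_TiC_dia = 2.01e8, 5.8e8   # W/m^2K, AMM values as quoted above

# Eq. (2): series thermal resistance of the Cu/TiC/diamond interface
G_c = 1.0 / (1.0 / G_Cu_TiC + d / k_TiC + 1.0 / G_TiC_dia)

# Effective diamond conductivity including the interfacial conductance
k_d_eff = k_d / (1.0 + k_d / (R * G_c))

# Eq. (1): solve the DEM relation for the composite conductivity k_c
dem = lambda k_c: (1 - V_d) * (k_c / k_m) ** (1 / 3) - (k_d_eff - k_c) / (k_d_eff - k_m)
k_c = brentq(dem, k_m, k_d_eff)

print(f"G_c = {G_c:.2e} W/m^2K, k_c = {k_c:.0f} W/mK")  # k_c comes out close to 552 W/mK
```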
The primary reasons for this improvement include the following: (1) A crack-free TiC interfacial layer, with a serrated shape and a thickness of about 80 nm, is formed between the diamond and the copper; it provides strong bonding and benefits the interfacial thermal conductance. This is also supported by the composite's tensile fracture surface (Fig. 7), on which the copper matrix remains attached to the diamond surface, ductile dimples of the copper matrix are observed, and transgranular fracture of diamond particles is visible. (2) The semi-coherent interface significantly reduces the interface strain energy compared with an incoherent interface, and an ordered interface improves phonon transmission, which benefits the interfacial thermal conductance [54]. (3) The TiC interlayer acts as a bridge and helps reduce the acoustic mismatch between the diamond and the copper (Z_Cu < Z_carbide < Z_Diamond) [7,11], decreasing phonon scattering between the diamond and the copper and thereby reducing the interfacial thermal resistance. (4) The pressure at the interface induced by hot forging can alter the vibrational properties and the interactions between atoms, decreasing the atomic distance and increasing the atomic vibration frequency [30]. This produces better interface contact and thus reduces phonon scattering, giving higher thermal conduction. However, an amorphous carbon layer is formed between the TiC and the diamond; the intrinsically disordered structure of this amorphous carbon layer creates a tortuous path for the propagation of vibrational waves and thus increases phonon scattering. Therefore, the formation of the amorphous carbon layer in the diamond particles adjacent to the interfacial layer has an adverse effect on the thermal conductivity of the Cu-Ti/Dia-0 composite.

### Fig. 7.

Fig. 7.   The tensile fracture surface of Cu-Ti/Dia-0 composite.

The formation of cracks, interface spallation and agglomeration of TiC particles leads to weak interface bonding between the diamond and the copper matrix in the 800 °C-annealed copper/diamond composites (Cu-Ti/Dia-1 and Cu-Ti/Dia-2), and this is the primary cause of the significant decrease in thermal conductivity (250 W/mK for Cu-Ti/Dia-1 and 193 W/mK for Cu-Ti/Dia-2, compared with 410 W/mK for Cu-Ti/Dia-0). Furthermore, the severe graphitisation of diamond further reduces the thermal conductivity of the Cu-Ti/Dia-2 composite, because the thermal conductivity of graphite is relatively low compared with that of diamond. Thus, the thermal conductivity of the hot-forged Cu-Ti/Dia composite may be further improved by (1) eliminating the deformed structure through an appropriate heat treatment that neither coarsens the interfacial TiC particles nor causes interface spallation, (2) optimising the thickness of the formed TiC interfacial layer to reduce the interface thermal resistance and acoustic mismatch, and (3) increasing the diamond particle size to reduce the total interface area. We will address these factors in future publications.

## 4. Conclusions

(1) An effective TiC interfacial layer, with a thickness of about 80 nm, was formed between the diamond particles and the copper matrix in the hot-forged Cu-Ti/Dia-0 composite. The TiC interfacial layer is continuous in the fabricated composite, and it has an orientation relationship of (111)TiC//(002)Cu and [1$\bar{1}$0]TiC//[1$\bar{1}$0]Cu with the copper matrix.

(2) A semi-coherent bond was formed between the TiC and the Cu matrix.
This interface structure gives the Cu-Ti/Dia-0 composite a thermal conductivity of 410 W/mK (about 74 % of the theoretical value), higher than most reported data.

(3) Titanium may induce the diamond surface to graphitise and form a graphite/amorphous carbon layer in the diamond particles during forging and during long-period annealing at 800 °C; together with the formation of a deformed structure in the copper matrix, this has a detrimental effect on the thermal conductivity of the hot-forged Cu-Ti/Dia composite.

(4) The difference in TiC morphology between the diamond {100} and {111} facets is eliminated after 800 °C annealing, but the annealing causes the TiC particles to coarsen and agglomerate and the TiC interfacial layer to crack and spall, which has an adverse effect on the composite's thermal conductivity.

(5) Heat treatments need to be optimised to improve the thermal conductivity of the hot-forged Cu-Ti/Dia composite by eliminating the deformed structure in the copper matrix with limited or no impact on the formed TiC interfacial layer.

## Acknowledgments

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-17-1-4025.

## References

A.L. Moore, L. Shi, Mater. Today 17 (2014) 163-174. J.S. Kang, M. Li, H. Wu, H. Nguyen, Y. Hu, Science 361 (2018) 575-578. Improving the thermal management of small-scale devices requires developing materials with high thermal conductivities. The semiconductor boron arsenide (BAs) is an attractive target because of ab initio calculation indicating that single crystals have an ultrahigh thermal conductivity. We synthesized BAs single crystals without detectable defects and measured a room-temperature thermal conductivity of 1300 watts per meter-kelvin. Our spectroscopy study, in conjunction with atomistic theory, reveals that the distinctive band structure of BAs allows for very long phonon mean free paths and strong high-order anharmonicity through the four-phonon process. The single-crystal BAs has better thermal conductivity than other metals and semiconductors. Our study establishes BAs as a benchmark material for thermal management applications and exemplifies the power of combining experiments and ab initio theory in new materials discovery. H.F. Zhou, N. Du, J.D. Guo, S. Liu, J. Mater. Sci. Technol. 35 (2019) 1797-1802. E. Lee, E. Menumerov, R.A. Hughes, S. Neretina, T.F. Luo, ACS Appl. Mater. Interfaces 10 (2018) 34690-34698. C. Monachon, L. Weber, Acta Mater. 73 (2014) 337-346. J. Anaya, S. Rossi, M. Alomari, E. Kohn, L. Tóth, B. Pécz, K.D. Hobart, T.J. Anderson, T.I. Feygelson, B.B. Pate, M. Kuball, Acta Mater. 103 (2016) 141-152. G. Chang, F.Y. Sun, J.L. Duan, Z.F. Che, X.T. Wang, J. Wang, J.G. Wang, M.J. Kim, H.L. Zhang, Acta Mater. 160 (2018) 235-246. C.J.H. Wort, R.S. Balmer, Mater. Today 11 (2008) 22-28. L. Weber, R. Tavangar, Scr. Mater. 57 (2007) 988-991. Y.P. Wu, J.B. Luo, Y. Wang, G.L. Wang, H. Wang, Z.Q. Yang, G.F. Ding, Ceram. Int. 45 (2019) 13225-13234. G. Chang, F.Y. Sun, L.H. Wang, Z.X. Che, X.T. Wang, J.G. Wang, M.J. Kim, H.L. Zhang, ACS Appl. Mater. Interfaces 11 (2019) 26507-26517. L.H. Wang, J.W. Li, M. Catalano, G.Z. Bai, N. Li, J.J. Dai, X.T. Wang, H.L. Zhang, J.G. Wang, M.J. Kim, Compos. Part A-Appl. Sci. Manuf. 113 (2018) 76-82. G.Z. Bai, L.H. Wang, Y.J. Zhang, X.T. Wang, J.G. Wang, M.J. Kim, H.L. Zhang, Mater. Charact. 152 (2019) 265-275. T. Schubert, Ł. Ciupiński, W. Zieliński, A.
Michalski, T. Weißgärber, B. Kieback, Scr. Mater. 58 (2008) 263-266. J.W. Li, X.T. Wang, Y. Qiao, Y. Zhang, Z.B. He, H.L. Zhang, Scr. Mater. 109 (2015) 72-75. G.Z. Bai, Y.J. Zhang, J.J. Dai, L.H. Wang, X.T. Wang, J.G. Wang, M.J. Kim, X.Z. Chen, H.L. Zhang, J. Alloys. Compd. 794 (2019) 473-481. J.W. Li, H.L. Zhang, L.H. Wang, Z.F. Che, Y. Zhang, J.G. Wang, M.J. Kim, X.T. Wang, Compos. Part A-Appl. Sci. Manuf. 91 (2016) 189-194. G.Z. Bai, N. Li, X.T. Wang, J.G. Wang, M.J. Kim, H.L. Zhang, J. Alloys. Compd. 735 (2018) 1648-1653. C. Zhang, R.C. Wang, Z.Y. Cai, C.Q. Peng, Y. Feng, L. Zhang, Surf. Coat. Technol. 277 (2015) 299-307. J.Q. Sang, W.L. Yang, J.J. Zhu, L.C. Fu, D.Y. Li, L.P. Zhou, J. Alloys. Compd. 740 (2018) 1060-1066. V.M. das Chagas, M.P. PeÇ anha, R. da Silva Guimarães, A.A.A. dos Santos, M.G. de Azevedo, M. Filgueira, J. Alloys. Compd. 791 (2019) 438-444. H.J. Cho, Y.J. Kim, U. Erb, Compos. B-Eng. 155 (2018) 197-203. S.B. Ren, X.Y. Shen, C.Y. Guo, N. Liu, J.B. Zang, X.B. He, X.H. Qu, Compos. Sci. Technol. 71 (2011) 1550-1555. Y.H. Sun, C. Zhang, L.K. He, Q.N. Meng, B.C. Liu, K. Gao, J.H. Wu, Sci. Rep. 8 (2018) 11104. Diamond/Al composites containing B4C-coated and uncoated diamond particles were prepared by powder metallurgy. The microstructure, bending strength and thermal conductivity were characterized considering the B4C addition and diamond fraction. The influence of B4C coating and fraction of diamond on both bending strength and thermal conductivity were investigated. The bending strength increased with decreasing diamond fraction. Moreover, addition of B4C coating led to an obvious increase in bending strength. The peak value at 261.2 MPa was achieved in the composite with 30 vt.% B4C-coated diamond particles, which was about twice of that for 30 vt.% uncoated diamond/Al composite (140.1 MPa). The thermal conductivity enhanced with the increase in diamond fraction, and the highest value (352.7 W/m.K) was obtained in the composite with 50 vt.% B4C-coated diamond particles. Plating B4C on diamond gave rise to the enhancement in bending strength and thermal conductivity for diamond/Al composites, because of the improvement of the interfacial bonding between diamond and aluminum matrix. S.D. Ma, N.Q. Zhao, C.S. Shi, E.Z. Liu, C.N. He, F. He, L.Y. Ma, Appl. Surf. Sci. 402 (2017) 372-383. Y.P. Pan, X.B. He, S.B. Ren, M. Wu, X.H. Qu, J. Mater. Sci. 53 (2018) 8978-8988. J. Grzonka, M.J. Kruszewski, M. Rosi´nski, Ł. Ciupi´nski, A. Michalski, K.J. Kurzydłowski, Mater. Charact. 99 (2015) 188-194. Q.P. Kang, X.B. He, S.B. Ren, L. Zhang, M. Wu, C.Y. Guo, W. Cui, X.H. Qu, Appl. Therm. Eng. 60 (2013) 423-429. X.Z. Wu, L.Y. Li, W. Zhang, M.X. Song, W.L. Yang, K. Peng, Diam. Relat. Mater. 98 (2019), 107467. N. Mehra, L.W. Mu, T. Ji, X.T. Yang, J. Kong, J.W. Gu, J.H. Zhu, Appl. Mater. Today 12 (2018) 92-130. S.V. Kidalov, F.M. Shakhov, Materials 2 (2009) 2467-2495. X.Y. Shen, X.B. He, S.B. Ren, H.M. Zhang, X.H. Qu, J. Alloys. Compd. 529 (2012) 134-139. J.W. Li, H.L. Zhang, Y. Zhang, Z.F. Che, X.T. Wang, J. Alloys. Compd. 647 (2015) 941-946. Y.H. Dong, R.Q. Zhang, X.B. He, Z.G. Ye, X.H. Qu, Mater. Sci. Eng. B-Solid State Mater. Adv. Technol. 177 (2012) 1524-1530. H. Bai, N.G. Ma, J. Lang, C.X. Zhu, J. Alloys. Compd. 580 (2013) 382-385. H. Bai, N.G. Ma, J. Lang, C.X. Zhu, Y. Ma, Compos. B-Eng. 52 (2013) 182-186. K. Chu, Z.F. Liu, C.C. Jia, H. Chen, X.B. Liang, W.J. Gao, W.H. Tian, H. Guo, J. Alloys. Compd. 490 (2010) 453-458. H. Chen, C.C. Jia, S.J. Li, J. Mater. Sci. 47 (2012) 3367-3375. F. 
Yang, W. Sun, A. Singh, L. Bolzoni, JOM 70 (2018) 2243-2248. F. Yang, Y. Su, S.Q. Jia, Q.Y. Zhao, L. Bolzoni, T. Li, M. Qian, JOM 71 (2019) 4867-4871. J.H. Richardson, J. Am. Ceram. Soc. 48 (1965) 497-499. S.H. Jiang, H. Wang, Y. Wu, X.J. Liu, H.H. Chen, M.J. Yao, B. Gault, D. Ponge, D. Raabe, A. Hirata, M.W. Chen, Y.D. Wang, Z.P. Lu, Nature 544 (2017) 460-464. Next-generation high-performance structural materials are required for lightweight design strategies and advanced energy applications. Maraging steels, combining a martensite matrix with nanoprecipitates, are a class of high-strength materials with the potential for matching these demands. Their outstanding strength originates from semi-coherent precipitates, which unavoidably exhibit a heterogeneous distribution that creates large coherency strains, which in turn may promote crack initiation under load. Here we report a counterintuitive strategy for the design of ultrastrong steel alloys by high-density nanoprecipitation with minimal lattice misfit. We found that these highly dispersed, fully coherent precipitates (that is, the crystal lattice of the precipitates is almost the same as that of the surrounding matrix), showing very low lattice misfit with the matrix and high anti-phase boundary energy, strengthen alloys without sacrificing ductility. Such low lattice misfit (0.03 +/- 0.04 per cent) decreases the nucleation barrier for precipitation, thus enabling and stabilizing nanoprecipitates with an extremely high number density (more than 10(24) per cubic metre) and small size (about 2.7 +/- 0.2 nanometres). The minimized elastic misfit strain around the particles does not contribute much to the dislocation interaction, which is typically needed for strength increase. Instead, our strengthening mechanism exploits the chemical ordering effect that creates backstresses (the forces opposing deformation) when precipitates are cut by dislocations. We create a class of steels, strengthened by Ni(Al,Fe) precipitates, with a strength of up to 2.2 gigapascals and good ductility (about 8.2 per cent). The chemical composition of the precipitates enables a substantial reduction in cost compared to conventional maraging steels owing to the replacement of the essential but high-cost alloying elements cobalt and titanium with inexpensive and lightweight aluminium. Strengthening of this class of steel alloy is based on minimal lattice misfit to achieve maximal precipitate dispersion and high cutting stress (the stress required for dislocations to cut through coherent precipitates and thus produce plastic deformation), and we envisage that this lattice misfit design concept may be applied to many other metallic alloys. B.L. Bramfitt, Metall. Mater. Trans. B. 1 (1970) 1987-1995. Yf. Zhao, Z. Qian, X. Ma, H.W. Chen, T. Gao, Y.Y. Wu, X.F. Liu, ACS Appl. Mater. Interfaces 8 (2016) 28194-28201. L. Jiang, H.M. Wen, H. Yang, T. Hu, T. Topping, D.L. Zhang, E.J. Lavernia, J.M. Schoenung, Acta Mater. 89 (2015) 327-343. C. Monachon, L. Weber, C. Dames, Annu. Rev. Mater. Res. 46 (2016) 433-463. Y. Zhang, J.W. Li, L.L. Zhao, H.L. Zhang, X.T. Wang, Mater. Des. 63 (2014) 838-847. K. Chu, C.C. Jia, X.B. Liang, H. Chen, Metall. Mater. Trans. A-Phys. Metall. Mater. Sci. 17 (2010) 234-240. C. Xue, J.K. Yu, Surf. Coat. Technol. 217 (2013) 46-50. H.J. Cho, D. Yan, J. Tam, U. Erb, J. Alloys. Compd. 791 (2019) 1128-1137. K. Yoshida, H. Morigami, Microelectron. Reliab. 44 (2004) 303-308. K. Raza, F.A. Khalid, T. Mabrouki, Mater. Des. 86 (2015) 248-258. Y. Zhang, H.L. 
Zhang, J.H. Wu, X.T. Wang, Scr. Mater. 65 (2011) 1097-1100. T. Beechem, P.E. Hopkins, J. Appl. Phys. 106 (2009) 124301.
http://www.tennisabstract.com/blog/2017/01/15/measuring-the-performance-of-tennis-prediction-models/
# Measuring the Performance of Tennis Prediction Models

With the recent buzz about Elo rankings in tennis, both at FiveThirtyEight and here at Tennis Abstract, comes the ability to forecast the results of tennis matches. It's not far-fetched to ask which of these models performs better and, even more interestingly, how they fare compared to other 'models', such as the ATP ranking system or betting markets.

For this admittedly limited investigation, we collected the (implied) forecasts of five models, that is, FiveThirtyEight, Tennis Abstract, Riles, the official ATP rankings, and the Pinnacle betting market for the US Open 2016. The first three models are based on Elo. For inferring forecasts from the ATP ranking, we use a specific formula [1], and for Pinnacle, which is one of the biggest tennis bookmakers, we calculate the implied probabilities based on the provided odds (minus the overround) [2].

Next, we simply compare forecasts with reality for each model, asking: if player A was predicted to be the winner ($P(a) > 0.5$), did he really win the match? When we do that for each match and each model (ignoring retirements or walkovers) we come up with the following results.

```
Model     % correct
Pinnacle  76.92%
538       75.21%
TA        74.36%
ATP       72.65%
Riles     70.09%
```

What we see here is the percentage of predictions that were actually right. The betting model (based on the odds of Pinnacle) comes out on top, followed by the Elo models of FiveThirtyEight and Tennis Abstract. Interestingly, the Elo model of Riles is outperformed by the predictions inferred from the ATP ranking. Since there are several parameters that can be used to tweak an Elo model, Riles may still have some room left for improvement.

However, just looking at the percentage of correctly called matches does not tell the whole story. In fact, there are more granular metrics for investigating the performance of a prediction model: Calibration, for instance, captures the ability of a model to provide forecast probabilities that are close to the true probabilities. In other words, in an ideal model, we want 70% forecasts to come true in exactly 70% of the cases. Resolution measures how much the forecasts differ from the overall average. The rationale here is that just using the expected average values for forecasting will lead to a reasonably well-calibrated set of predictions; however, it will not be as useful as a method that manages the same calibration while taking current circumstances into account. In other words, the more extreme (and still correct) forecasts are, the better.

In the following table we categorize the set of predictions into bins of different probabilities and show what percentage of the predictions was correct per bin. This also enables us to calculate Calibration and Resolution measures for each model.

```
Model     50-59%  60-69%  70-79%  80-89%  90-100%  Cal    Res    Brier
538       53%     61%     85%     80%     91%      .003   .082   .171
TA        56%     75%     78%     74%     90%      .003   .072   .182
Riles     56%     86%     81%     63%     67%      .017   .056   .211
ATP       50%     73%     77%     84%     100%     .003   .068   .185
Pinnacle  52%     91%     71%     77%     95%      .015   .093   .172
```

As we can see, the predictions are not always perfectly in line with what the corresponding bin would suggest. Some of these deviations, for instance the fact that for the Riles model only 67% of the 90-100% forecasts were correct, can be explained by small sample size (only three in that case).
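As an aside, the Cal, Res and Brier columns are straightforward to reproduce once the raw (forecast, outcome) pairs are available. The snippet below is my own sketch rather than the script used for this article; the 10%-wide bins and the standard Murphy decomposition of the Brier score are assumptions on my part.

```python
# Sketch of the Brier score and its calibration/resolution components,
# computed from a list of (forecast probability, outcome) pairs.
import numpy as np

def brier_decomposition(forecasts, outcomes, n_bins=10):
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    n = len(f)
    base_rate = o.mean()
    bins = np.clip((f * n_bins).astype(int), 0, n_bins - 1)

    calibration = resolution = 0.0
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        f_k = f[mask].mean()      # mean forecast in the bin
        o_k = o[mask].mean()      # observed frequency in the bin
        calibration += mask.sum() * (f_k - o_k) ** 2
        resolution += mask.sum() * (o_k - base_rate) ** 2

    calibration /= n
    resolution /= n
    brier = np.mean((f - o) ** 2)   # roughly calibration - resolution + uncertainty
    return calibration, resolution, brier

# toy usage with made-up forecasts (outcome 1 = predicted player won, 0 = lost)
print(brier_decomposition([0.71, 0.64, 0.55, 0.92], [1, 1, 0, 1]))
```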
However, there are still two interesting cases (marked in bold) where the sample size is larger and which caught my interest. Both the Riles and Pinnacle models seem to be strongly underconfident (statistically significantly so) with their 60-69% predictions. In other words, these probabilities should have been higher, because, in reality, these forecasts actually came true 86% and 91% of the time. [3] For the betting aficionados, the fact that Pinnacle underestimates the favorites here may be really interesting, because it could reveal some value, as punters would say. For the Riles model, this could be a starting point for tweaking the model.

In the last three columns, Calibration (the lower the better), Resolution (the higher the better), and the Brier score (the lower the better) are shown. The Brier score combines Calibration and Resolution (and the uncertainty of the outcomes) into a single score for measuring the accuracy of predictions. The models of FiveThirtyEight and Pinnacle (for the subset of data used) perform essentially equally well. Then there is a slight gap until the model of Tennis Abstract and the ATP ranking model come in third and fourth, respectively. The Riles model performs worst in terms of both Calibration and Resolution, and hence ranks fifth in this analysis.

To conclude, I would like to show a common visual representation that is used to graphically display a set of predictions. The reliability diagram compares the observed rate of forecasts with the forecast probability (similar to the above table). The closer one of the colored lines is to the black line, the more reliable the forecasts are. If a forecast line is above the black line, it means that the forecasts are underconfident; in the opposite case, the forecasts are overconfident. Given that we only investigated one tournament and therefore had to work with a low sample size (117 predictions), the big swings in the graph are somewhat expected. Still, we can see that the model based on ATP rankings does a really good job of preventing overestimations, even though it is known to be outperformed by Elo in terms of prediction accuracy.

To sum up, this analysis shows how different predictive models for tennis can be compared with each other in a meaningful way. Moreover, I hope I was able to highlight some of the areas where a model is good and where it is bad. Obviously, this investigation could go into much more detail, for example by comparing the models on how well they do for different kinds of players (e.g., based on ranking), different surfaces, etc. This is something I will save for later. For now, I'll try to get my sleeping patterns accustomed to the schedule of play for the Australian Open, and I hope you can do the same.

This is a guest article by me, Peter Wetz. I am a computer scientist interested in racket sports and data analytics based in Vienna, Austria.

#### Footnotes

1. $P(a) = a^e / (a^e + b^e)$ where $a$ are player A's ranking points, $b$ are player B's ranking points, and $e$ is a constant. We use $e = 0.85$ for ATP men's singles.

2. The betting market in itself is not really a model; that is, the goal of the bookmakers is simply to balance their book. This means that the odds, more or less, reflect the wisdom of the crowd, making them a very good predictor.

3. As an example, one instance where Pinnacle was underconfident and all other models were more confident is the R32 encounter between Ivo Karlovic and Jared Donaldson. Pinnacle's implied probability for Karlovic to win was 64%. The other models (except the also underconfident Riles model) gave 72% (ATP ranking), 75% (FiveThirtyEight), and 82% (Tennis Abstract).
Turns out, Karlovic won in straight sets. One factor at play here might be that this was the US Open, where US bettors are likely to be confident about the US player Jared Donaldson and hence place bets on him. As a consequence, to balance the book, Pinnacle will lower the odds on Donaldson, which results in higher odds (and a lower implied probability) for Karlovic.
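For completeness, the overround removal mentioned in footnote 2 can be done in a couple of lines. The proportional normalisation shown below is only one common convention and an assumption on my part; the article does not state exactly how the bookmaker margin was removed.

```python
# Turn two-way decimal odds into implied probabilities with the overround removed,
# assuming the bookmaker margin is distributed proportionally between the players.
def implied_probabilities(odds_a, odds_b):
    raw_a, raw_b = 1.0 / odds_a, 1.0 / odds_b
    overround = raw_a + raw_b            # > 1 because of the bookmaker margin
    return raw_a / overround, raw_b / overround

print(implied_probabilities(1.50, 2.75))  # hypothetical odds -> roughly (0.65, 0.35)
```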
http://netdesszert.hu/freebooks/page/24
By Miller G. A.

## Download E-books Finite simple groups: proceedings of an instructional conference organized by the London Mathematical Society PDF

By Martin Beynon Powell, Graham Higman

## Download E-books Marches Aleatoires sur les Groupes de Lie (Lecture Notes in Mathematics - Vol 624) (French Edition) PDF

By Yves Guivarc'h, Michael Keane, Bernard Roynette

## Download E-books Methods of Representation Theory: With Applications to Finite Groups and Orders: v. 1 (Pure & Applied Mathematics) PDF

By CW Curtis, Irving Reiner

## Download E-books Moment Maps, Cobordisms, and Hamiltonian Group Actions (Mathematical Surveys and Monographs, Vol. 98) PDF

By Victor Guillemin

This research monograph presents many new results in a rapidly developing area of great current interest. Guillemin, Ginzburg, and Karshon show that the underlying topological thread in the computation of invariants of G-manifolds is a consequence of a linearization theorem involving equivariant cobordisms. The book incorporates a novel approach and showcases exciting new research. Over the last twenty years, "localization" has been one of the dominant themes in the area of equivariant differential geometry. Typical results are the Duistermaat-Heckman theory, the Berline-Vergne-Atiyah-Bott localization theorem in equivariant de Rham theory, and the "quantization commutes with reduction" theorem and its various corollaries. To formulate the idea that these theorems are all consequences of a single result involving equivariant cobordisms, the authors have developed a cobordism theory that allows the objects to be non-compact manifolds. A key ingredient in this non-compact cobordism is an equivariant-geometrical object which they call an "abstract moment map". This is a natural and important generalization of the notion of a moment map occurring in the theory of Hamiltonian dynamics. The book contains a number of appendices that include introductions to proper group actions on manifolds, equivariant cohomology, Spin${^\mathrm{c}}$-structures, and stable complex structures. It is aimed at graduate students and research mathematicians interested in differential geometry. It is also suitable for topologists, Lie theorists, combinatorists, and theoretical physicists. A prerequisite is some expertise in calculus on manifolds and basic graduate-level differential geometry.

The 'storm soldiers' of the Luftwaffe, the elite Sturmgruppen units comprised the most heavily armed and armoured fighter interceptors ever produced by the Germans. Their role was to smash like a powerful fist through the massed ranks of USAAF daylight bombers. Only volunteers could serve with these elite units, and each pilot was trained to close with the enemy and engage him in extremely short-range combat, attacking from the front and the rear in tight arrowhead formations. In exceptional circumstances pilots would even ram their enemy. This book chronicles the brief but violent career of the Sturmgruppen during the dark days of 1944-45, using first-hand accounts and rare archival images.

By Jack H. Smith

Nicknamed the 'Unicorns', the 359th FG was one of the last groups to arrive in the UK for service in the ETO with the Eighth Air Force.
First seeing action on 13 December 1943, the group initially flew bomber escort sweeps in P-47s, before converting to the ubiquitous P-51 in March/April 1944. Throughout its time in the ETO, the 359th was credited with the destruction of 351 enemy aircraft between December 1943 and May 1945. The exploits of all 12 aces created by the group are detailed, along with the most significant missions flown. This book also discusses the various markings worn by the group's three squadrons, the 368th, 369th and 370th FSs.

By Ji L.

## Download E-books Omnipotent Government: The Rise of the Total State and Total War (Lib Works Ludwig Von Mises CL) PDF

Published in 1944, during World War II, Omnipotent Government was Mises's first book written and published after he arrived in the United States. In this volume Mises provides, in economic terms, an explanation of the international conflicts that caused both world wars. Although written more than half a century ago, Mises's main theme still stands: government interference in the economy leads to conflicts and wars. According to Mises, the last and best hope for peace is liberalism, the philosophy of liberty, free markets, limited government, and democracy.

Ludwig von Mises (1881-1973) was the leading spokesman of the Austrian School of economics throughout most of the twentieth century. He earned his doctorate in law and economics from the University of Vienna in 1906. In 1926, Mises founded the Austrian Institute for Business Cycle Research. From 1909 to 1934, he was an economist for the Vienna Chamber of Commerce. Before the Anschluss, in 1934 Mises left for Geneva, where he was a professor at the Graduate Institute of International Studies until 1940, when he emigrated to New York City. From 1948 to 1969, he was a visiting professor at New York University.

Bettina Bien Greaves is a former resident scholar, trustee, and longtime staff member of the Foundation for Economic Education. She has written and lectured extensively on topics of free market economics. Her articles have appeared in such journals as Human Events, Reason, and The Freeman: Ideas on Liberty. A student of Mises, Greaves has become an expert on his work in particular and that of the Austrian School of economics in general. She has translated several Mises monographs, compiled an annotated bibliography of his work, and edited collections of papers by Mises and other members of the Austrian School.
https://cstheory.stackexchange.com/questions/11997/is-tensor-rank-is-in-vnp
# Is tensor rank in VNP?

Is it known whether the tensor rank of three-dimensional tensors lies in VNP (Valiant's nondeterministic class)? If yes, what is known about higher-dimensional tensor rank?

In fact, I am interested in a much simpler problem. I would like to know whether one can construct a class of non-zero polynomials $f_n$ in $n^3$ variables, lying in VNP, such that $f_n(T)=0$ whenever the tensor rank of $T$ is less than $n^{1.9}$. For simplicity, let us assume that we are working over $\mathbb{C}$. I would like to mention that it is OK if $f_n(T)=0$ also for some $T$ of high rank; all I need is that $f_n(T)=0$ for all small-rank tensors.

The collection of tensors of a given rank, or even of tensors with rank at most $k$, is not a (Zariski-)closed set, so it cannot be described as the vanishing locus of any set of polynomials, regardless of their complexity. (However, over finite fields tensor rank is $NP$-complete, and over $\mathbb{Q}$ it is $NP$-hard but not known to be in $NP$. But these are the usual Boolean classes, not the Valiant analogues.) The closure of the set of tensors of rank at most $k$ is the set of tensors of border rank at most $k$. Call a set of polynomials whose vanishing locus is the set of tensors of border rank at most $k$ a system of (set-theoretic) defining equations for border rank at most $k$. Such defining equations are known for small $k$, but for most $k$ finding such defining equations is a long-standing open problem, related to the border rank and multiplicative complexity of matrix multiplication.

• Thanks for the answer. I would just like to note that it is OK if $f(T)=0$ for some $T$ of high rank; I only need that $f(T)=0$ on all small-rank tensors. – Klim Jul 11 '12 at 3:47
• @Klim: Presumably you also want $f$ to be not the zero function... Beyond that, is there some additional nontriviality condition you want $f$ to have, for example, that $f$ depend on all $n^{3}$ of its inputs? (If so you might add that clarification to the question.) – Joshua Grochow Jul 11 '12 at 4:18
• No, $f$ need not depend on all its inputs. – Klim Jul 11 '12 at 5:09
https://www.gamedev.net/forums/topic/624031-2d-isometric-screen-to-tile-coordinates/
2D isometric: screen to tile coordinates

I'm writing an isometric 2D game and I'm having difficulty figuring out precisely which tile the cursor is on. Here's a drawing:

[Not allowed to post pictures :( here's a link : http://imageshack.us.../tilespace.png/ ]

where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, and W and H are tile width and tile height in pixels, respectively. The best I could figure out so far is this:

    int xtemp = xs / (W / 2);
    int ytemp = ys / (H / 2);
    int xt = (xs - ys) / 2;
    int yt = ytemp + xt;

This seems almost correct but gives me a very imprecise result, making it hard to select certain tiles; sometimes it selects a tile next to the one I'm trying to click on. I don't understand why, and I'd like it if someone could help me understand the logic behind this. Thanks!

I wrote an article on the subject a while ago: http://www.wildbunny.co.uk/blog/2011/03/27/isometric-coordinate-systems-the-modern-way/ Hope it helps! Cheers, Paul.

Use a mouse map to make things way easier: Isometric 'n' Hexagonal Maps Part I (Skip down to the part labeled 'Mouse Matters' and things will become clear.)

I wrote an article on the subject a while ago: http://www.wildbunny...the-modern-way/ Hope it helps! Cheers, Paul.

Wow, such a simple approach. I ended up using a transformation matrix composed of a translation, a rotation and a scaling, so that going back and forth between pixels and tile coordinates is as simple as applying the transformation or its inverse; but this seems even simpler.
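For reference, here is a minimal sketch (mine, not from the thread) of the direct conversion for a "diamond" isometric layout. It assumes the forward mapping xs = (xt - yt) * W/2, ys = (xt + yt) * H/2 with tile (0, 0) anchored at the screen origin; if your map uses a different origin or axis convention, the signs and offsets change accordingly.

    import math

    # Screen -> tile for a diamond isometric layout, assuming the forward mapping
    #   xs = (xt - yt) * W / 2,   ys = (xt + yt) * H / 2
    def screen_to_tile(xs, ys, W, H):
        half_w, half_h = W / 2.0, H / 2.0
        xt = (xs / half_w + ys / half_h) / 2.0
        yt = (ys / half_h - xs / half_w) / 2.0
        return math.floor(xt), math.floor(yt)

    def tile_to_screen(xt, yt, W, H):
        return (xt - yt) * W / 2.0, (xt + yt) * H / 2.0

    # Round-trip check: the screen point of tile (1, 0) maps back to tile (1, 0)
    print(screen_to_tile(*tile_to_screen(1, 0, 64, 32), 64, 32))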
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-7-trigonometric-identities-and-equations-chapter-7-test-prep-review-exercises-page-737/50
## Precalculus (6th Edition)

Published by Pearson

# Chapter 7 - Trigonometric Identities and Equations - Chapter 7 Test Prep - Review Exercises - Page 737: 50

#### Answer

$2\cos^3 x-\cos x=\frac{\cos^2 x-\sin^2 x}{\sec x}$

#### Work Step by Step

Start with the right side: $\frac{\cos^2 x-\sin^2 x}{\sec x}$

Rewrite in terms of sine and cosine: $=\frac{\cos^2 x-\sin^2 x}{\frac{1}{\cos x}}$

Multiply top and bottom by $\cos x$: $=\frac{\cos^2 x-\sin^2 x}{\frac{1}{\cos x}}*\frac{\cos x}{\cos x}$ $=\cos x*(\cos^2 x-\sin^2 x)$

Rewrite $\sin^2 x$ as $1-\cos^2 x$: $=\cos x*(\cos^2 x-(1-\cos^2 x))$

Simplify: $=\cos x*(\cos^2 x-1+\cos^2 x)$ $=\cos x*(2\cos^2 x-1)$ $=2\cos^3 x-\cos x$

Since this equals the left side, the identity has been proven.
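As a quick sanity check (not part of the textbook solution), the identity can also be verified symbolically with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
lhs = 2*sp.cos(x)**3 - sp.cos(x)
rhs = (sp.cos(x)**2 - sp.sin(x)**2) / sp.sec(x)
print(sp.simplify(lhs - rhs))  # should print 0, confirming the identity
```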
https://www.physicsforums.com/threads/d-e-boundary-value-problem.674490/
# D.E.: Boundary Value Problem

1. Feb 25, 2013, Jeff12341234:
I'm not sure if my answer is correct. Did I make a mistake somewhere? I'm not sure the ± needs to be there.

2. Feb 25, 2013, HallsofIvy (Staff Emeritus):
How in the world did you get a quadratic equation out of this? $y(2)= (C_1+ 6C_2)e^6= e^6$ and $y'(1)= (3C_1+ 4C_2)e^3= e^3$. The derivative is $y'= 3C_1e^{3x}+ C_2e^{3x}+ 3C_2xe^{3x}= ([3C_1+ C_2]+ 3C_2x)e^{3x}$. It does not involve "$C_1C_2$"! You have $C_1+ 6C_2= 1$ and $3C_1+ 4C_2= 1$, two linear equations.

3. Feb 25, 2013, Jeff12341234:
$c_1$ is represented by c, and $c_2$ is represented by d. That's $y'$. I did make an error by leaving out the + sign between $c_1$ and $c_2$ for $y'$. That makes $c_1 = -1$ and $c_2 = 1$.

Last edited: Feb 25, 2013
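For reference, a short worked step: assuming, as the quoted derivative suggests, that the general solution has the form $y = C_1 e^{3x} + C_2 x e^{3x}$, the product rule gives

$y' = 3C_1 e^{3x} + C_2 e^{3x} + 3C_2 x e^{3x} = \bigl([3C_1 + C_2] + 3C_2 x\bigr)e^{3x},$

which is linear in $C_1$ and $C_2$, so applying the two boundary conditions always leads to a pair of linear equations rather than a quadratic.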
2017-08-22 12:15:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.569328784942627, "perplexity": 1101.5720442854742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00186.warc.gz"}
https://www.intechopen.com/books/biosensors-for-health-environment-and-biosecurity/multiplexing-capabilities-of-biosensors-for-clinical-diagnostics/
"Biosensors for Health, Environment and Biosecurity", book edited by Pier Andrea Serra, ISBN 978-953-307-443-6, Published: July 19, 2011 under CC BY-NC-SA 3.0 license. © The Author(s).

# Multiplexing Capabilities of Biosensors for Clinical Diagnostics

By Johnson K.K. Ng and Samuel Chong. DOI: 10.5772/17187

## 1. Introduction

The detection of biomolecules, be it proteins or nucleic acids such as DNA or RNA, is a critical process in biomedical research and clinical diagnostics. With the former, it helps us to unravel the complexity of our human body, and provides important information down at the cellular and sub-cellular level that allows us to better understand what our bodies are comprised of, how they function, how they respond to disease and aging, or why they fail to respond. This information, when applied to clinical diagnostics, helps us better manage our health and enhance the quality of life. To generate any meaningful or conclusive information for clinical diagnostics, it is often necessary to detect several targets simultaneously.
Therefore, technologies for performing biomolecular detection must be able to interrogate several targets at one time, i.e. perform multiplexing. These targets can be proteins or nucleic acid targets from different cellular species, such as for infectious disease diagnosis, or from the same species, i.e. along the same genome, such as single-nucleotide polymorphism (SNP) genotyping for pharmacogenomics. It can also be for identifying aberrant biomolecules from normal ones, such as mutation detection in cancer diagnostics and prognostics. A platform capable of performing multiplexed biological detection is therefore an indispensable tool for accurate clinical diagnostics. Through advances in molecular biology as well as in areas such as microelectronics, microfabrication, material science, and optics, there has been a proliferation of miniaturized platforms, or biosensors, for performing biological analysis based on a variety of multiplexing technologies. These range from those capable of detecting a few targets to those capable of interrogating hundreds or even thousands of targets. Here we attempt to provide a concise overview of such technologies, as well as provide some insight into a simple technology that we developed in-house. Due to the enormous amount of progress in this area, this is by no means a comprehensive overview.

## 2. Review of current technologies

### 2.1. Solution-based

One of the most widely used technologies for multiplexed detection involves performing the detection within a single homogeneous solution. The best example of this is the multiplexed polymerase chain reaction (PCR). PCR, which is one of the most common techniques used in molecular biology, involves using a pair of primers to amplify a certain fragment of a target DNA or RNA manifold, until there is a sufficient amount for detection or further downstream analysis. In multiplex PCR, several pairs of primers are used to simultaneously amplify different fragments. It is relatively easy to perform multiplexing in PCR, because the primers can first be designed to amplify fragments of different sizes, and these fragments can then be detected based on their size differences, either using gel electrophoresis or high-resolution melting on real-time PCR systems. Alternatively, the different fragments can also be targeted by different probes conjugated to fluorescent dyes of a specific color. Upon hybridizing to the targets, the probes emit an optical signal corresponding to their dye, which is detected in a real-time PCR system. Multiplex PCR is one of the most common techniques used in clinical diagnostics because the technology has matured significantly since its invention almost three decades ago. It is also rather easy to implement on biosensors, as the process can be carried out in microchambers (Merritt, 2010), or coupled to a capillary electrophoretic module (Thaitrong, 2009). The ability to perform multiplexed detection in PCR results from:

1. the unique feature in PCR that allows primers to be designed to amplify fragments of different sizes,
2. the ability of the gel electrophoresis or real-time PCR system to differentiate the fragments by size as a result of their difference in electrophoretic mobility or melting temperature, and
3. the ability to differentiate the probes through color-emitting dyes.

Probes used in multiplex PCR are conjugated with fluorescent dyes that emit different wavelengths of light, allowing them to be differentially detected.
As a result, there is always a need for powerful optical detection, capable of exciting and detecting one or multiple wavelengths of light. Due to limitations in the number of different wavelengths of light that can be excited and detected, the number of different multiplexed targets that can be detected in a single reaction is generally not high. One way to overcome this limitation is to combine multiplex PCR with other technologies, such as microarrays.

### 2.2. 2-D microarray

The development of microarrays is driven by the demand for high-throughput multiplexed analysis, such as the mapping of the human genome. This platform enables hundreds of thousands of proteins or DNA probes to be precisely immobilized onto designated locations within a microscopic area of a silicon or glass substrate (Ramsay, 1998; Schena et al, 1995), with the different probes identified through their unique locations. The proteins or oligonucleotides can be immobilized onto the surface using a high-precision robotic arrayer or synthesized in situ using light-directed chemistry. With such high-density chips, it becomes possible to perform massively parallel interrogation of a large number of targets, making microarrays a platform of choice for applications such as gene expression analysis (Rahmatpanah, 2009), SNP genotyping (Wang et al, 1998; Lindroos et al, 2001) and transcriptome analysis (Li et al, 2006). Since the inception of the microarray about two decades ago, there has been a host of companies offering the technology commercially. United States-based Affymetrix is one of the first companies to offer commercial oligonucleotide microarrays, with its GeneChip one of the most widely used microarrays in a variety of applications, such as the prediction of tumour relapse in hepatocellular carcinoma patients (Roessler, 2010). Other companies include Agilent, which uses inkjet printing for oligo synthesis on its 2D microarrays (Fig. 1), Applied Microarrays and Roche NimbleGen. CombiMatrix's CMOS arrays have addressable electrodes that have been developed for both DNA detection and immunoassays (Gunn, 2010; Cooper, 2010). With the advent of microfabrication technology and with increased competition, the prices of these microarrays have come down significantly over the years, making the technology more accessible to the research and clinical diagnostics community.

### Figure 1.

Agilent's inkjet printing technology for oligonucleotide synthesis on 2D microarrays. A: the first layer of nucleotides is deposited on the activated microarray surface. B: growth of the oligos is shown after multiple layers of nucleotides have been precisely printed. C: close-up of one oligo as a new base is being added to the chain, which is shown in figure D. (Courtesy of Agilent Technologies. All rights reserved).

### 2.3. 3-D microarray

Despite its high-throughput potential, the 2-D microarray format is restricted by diffusion-limited kinetics, and by electrostatic repulsion between the solution-phase targets and the densely localized solid-phase probes. Furthermore, the amount of probes that can be immobilized on the planar substrate, and hence the sensitivity and signal-to-noise ratio (SNR), is also somewhat limited. The introduction of 3-D microarrays goes some way toward overcoming these limitations.
These 3-D microarrays comprise additional microstructures that are fabricated onto planar substrates to provide a high surface-density platform that increases the immobilization capacity of capture probes, enhances target accessibility and reduces background noise interference in DNA microarrays, leading to improved signal-to-noise ratios, sensitivity and specificity. An example of an early 3-D microarray is the gel-based chip (Kolchinsky & Mirzabekov, 2002). The use of an array of nanoliter-sized polyacrylamide gel pads on a glass slide provides distinct 3D microenvironments for the immobilization of oligonucleotides. Compared to planar glass substrates, the gel-based format can be applied with a probe concentration up to 100-fold higher, thereby increasing the SNR. The near solution-phase interaction between targets and probes within individual gel pads can also potentially alleviate the problems associated with diffusion-limited kinetics. These gel-based microarrays have been successfully demonstrated for the detection of SNPs associated with β-thalassemia mutations (Drobyshev et al, 1997), and for the identification of polymorphisms in the human mu-opioid receptor gene (LaForge et al, 2000). Other 3-D structures fabricated onto planar surfaces include conical dendrons as well as micropillars (Hong et al, 2005). By fabricating conical dendrons, nano-controlled spacings can be created to provide enough room for the target strand to access each probe, thereby creating a reaction format resembling that in a solution (Fig. 2). As a result, the hybridization time can be reduced significantly to allow effective discrimination of single-nucleotide mismatches (Hong et al, 2005).

### Figure 2.

Schematic diagram showing improved DNA hybridization onto a dendron-modified substrate as compared to that of a normal substrate.

Ramanamurthy et al (2008) reported the fabrication of ordered, high-aspect ratio nanopillar arrays on the surface of silicon-based chips to enhance signal intensity in DNA microarrays (Fig. 3). These 150-nm diameter nanopillars were found to enhance the hybridization signals by up to 7 times as compared to flat silicon dioxide substrates. In addition, hybridization of synthetic targets to capture probes that contained a single-base variation showed that the perfect matched duplex signals on dual-substrate nanopillars can be up to 23 times higher than the mismatched duplex signals. The Z-Slides microarray from United States-based company Life Bioscience comprises micropillars and nanowells to enhance spot morphology and eliminate cross-talk between probe sites. By detecting only the pillar surfaces, which are several hundred microns from the base, background noise is removed from the microarray scan. A 3-D microarray which is markedly different from the above-mentioned approaches involves immobilizing oligonucleotide probes onto a single thread instead of a planar substrate (Stimpson et al, 2004). The thread is subsequently wound around a core to form a compact, high-density SNP detection platform. Hybridization can be carried out by immersing the thread-and-core structure into a target solution, and completed within approximately 30 min. This platform has been demonstrated for the analysis of SNPs in CYP2C19, an important cytochrome P450 gene (Tojo et al, 2005).

### Figure 3.

SEM images of the nanopillars fabricated on silicon-based biosensors. (a) Single-substrate nanopillars consisting of SiO2. (b) Dual-substrate nanopillars consisting of a SiO2 layer atop the Si pillar.
(c) Very high-aspect ratio dual-substrate nanopillars. (d) Dense array of ordered dual-substrate nanopillars. Scale bars are all 500 nm.

### 2.4. Bead microarray

One of the best examples of 3-D microarrays, and perhaps also one of the most successful commercially available platforms, is the bead microarray. Unlike 2-D microarrays, the high surface-to-volume ratio of beads allows a larger amount of probes to be immobilized to improve the detection signals and signal-to-noise ratios. The small size of beads can further reduce the reaction volume, and the use of microfluidics in bead arrays can shorten the hybridization time to < 10 min, a 50 to 70-fold reduction as compared to conventional microarrays (Ali et al, 2003). Unlike the 2-D or 3-D microarrays discussed above, probes are usually conjugated onto the beads before the beads are immobilized onto the microarrays. The major challenge in developing bead arrays, therefore, is to determine the identities of the randomly assembled beads, and hence of their corresponding immobilized probes, in multiplexed analyses. The most common strategy is to encode beads with colorimetric signatures using semiconductor nanocrystals, visible dyes or fluorophores, and subsequently decode them through visual or fluorescence detection (Mulvaney et al, 2004). For example, Li et al (2001) mixed blue, green and orange fluorophores to yield 39 different codes for encoding 3.2 μm-diameter polystyrene beads assembled onto a wafer. Alternatively, two fluorophores can be mixed in different proportions to yield 100 distinguishable bead types that are subsequently decoded using two laser beams, as in the Luminex xMAP technology (Dunbar, 2006) (Fig. 4). The emission characteristics of organic fluorescent dyes are affected by changes in temperature, which may result in some bias when used in temperature-dependent studies (Liu et al, 2005). The fluorescent dyes also suffer from photobleaching, and this can significantly affect the discriminability between color codes, particularly if they are distinguished by the difference in their intensities. Quantum dots, which are photostable, have size-tunable emission wavelengths, and can be excited by a single wavelength to emit different colors at one time, are widely used to distinguish beads. Han et al. (2001) incorporated quantum dots at different intensities and colors to yield spectrally distinguishable polymeric beads of up to 10 distinct types (Fig. 4). Using 5-6 colors, each at 6 intensity levels, it is possible to achieve up to 40,000 codes using this approach, although this has yet to be demonstrated. These techniques for color-encoding beads are straightforward in that the color-emitting agents are directly impregnated into the beads. However, this also means that the encoder signals cannot be removed, resulting in possible interference between the encoder and reporter signals. To avoid this, the number of reporter dyes available for use would inadvertently be reduced. Also, encoding the beads into unique color codes is challenging, as the color-emitting agents must be mixed in precise proportions. The difficulty in distinguishing a large number of color codes further means that only up to 100 color codes have been demonstrated so far, limiting them to low or medium throughput applications (Xu et al, 2003; Li et al, 2004).

### Figure 4.
(a) A set of 100 distinguishable bead types can be created by mixing precise proportions of two fluorescent dyes, and subsequently detected using a flow cytometer with two laser beams. (Courtesy of Luminex Corporation. All rights reserved). (b) Quantum dot nanocrystals of 10 different emission colors incorporated into the beads to create spectrally distinguishable types. (Adapted by permission from Macmillan Publishers Ltd: Nature Biotechnology, copyright 2001).

Beads within an array can also be individually addressed using barcodes. A graphical barcode can be written inside fluorescently dyed beads through a technique termed "spatial selective photobleaching of the fluorescence" (Braeckmans, 2001). Using a specially adapted laser scanning confocal microscope, any sort of pattern can be photobleached at any depth inside the fluorescently dyed bead. This technique was used to photobleach a barcode of different band widths onto 45 μm-diameter fluorescent beads. The advantages of this technique are that only a single fluorescent dye is needed in the encoding scheme, and the number of codes achievable is virtually unlimited. However, there is still the problem of interference between the encoder and reporter fluorescence signals, while the effects of photobleaching during the decoding stage might alter or degrade the barcode. A widely used bead microarray platform for biological detection and clinical diagnostics is the commercial BeadArray from Illumina, a market leader in high-throughput bead microarrays. It assembles 3-micron silica beads onto fiber optic or planar silica slides, for a range of DNA and RNA analyses. There is also the Veracode technology, which uses a digital holographic barcode to identify the beads (Lin et al, 2009) (Fig. 5). When excited by a laser, each microbead, which has a pillar-like rather than spherical shape, emits an image resembling a barcode. Using this method, it becomes possible to have a virtually unlimited number of different bead types. The platform can be applied to both protein-based and DNA-based assays.

### Figure 5.

Illumina's BeadArray (top panel) and Veracode technology (bottom panel). (Courtesy of Illumina. All rights reserved)

## 3. A simple spatially addressable bead-based biosensor

### Figure 6.

Schematic representation of the spatially addressable bead-based biosensor (Adapted from Ng et al, 2010, copyright Elsevier Inc).

### 3.1. Biosensor fabrication

The biosensor consisted of an array of 19 x 24 polyacrylamide gel pads fabricated on a glass slide (Corning, Corning, NY) pre-treated with Bind Silane (GE Healthcare, Piscataway, NJ). The gel pads had a horizontal and vertical pitch of 300 μm, and each gel pad further comprised a 10 x 10 array of micropillars (10x10x10 μm) with a horizontal and vertical pitch of 10 μm (Fig. 7). A photopolymerization process described previously was used to create the array of gel pads (Proudnikov et al., 1998), after which the glass slide was treated in 0.1M NaBH4 for 30 min to reduce gel pad auto-fluorescence.

### 3.2. Oligonucleotide probes and targets

The six common South-east Asian beta-globin gene mutations selected for this study were -28 A→G, -29 A→G, IVSI5 G→C, IVSI1 G→T, Cd26 GAG→AAG, and IVSII654 C→T. For each mutation, allele-specific probes were designed to hybridize with perfect complementarity to either the wildtype or mutant variant (Table 1).
A biotin moiety was added to the 5' end of each probe, and conjugation of probes to 9.95 µm streptavidin-modified polystyrene beads was carried out according to a previously described protocol (Ng et al., 2008). PCR was carried out to amplify two fragments of the beta-globin gene, with the first fragment (319 bp) encompassing Exon 1, which includes all the targeted mutations other than IVSII654 C→T, which was contained in the second fragment (128 bp). Primer sequences were: Frag1-F: 5'-Cy3-ACggCTgTCATCACTTAgAC-3' (Genbank HUMHBB sequence 62010-62029); Frag1-R: 5'-CCCAgTTTCTATTggTCTCC-3' (HUMHBB sequence 62328-62309); Frag2-F: 5'-Cy3-TgTATCATgCCTCTTTgCACC-3' (HUMHBB sequence 63227-63247); and Frag2-R: 5'-CAATATgAAACCTCTTACATCAg-3' (HUMHBB: 63354-63332). Genomic DNA (100 ng) was amplified in a total volume of 50 µL containing 0.5 µM each of the two sets of primers, 200 µM of each deoxynucleotide triphosphate, and 1 U of HotStarTaq DNA polymerase in 1× supplied PCR buffer (Qiagen). Amplification was carried out in an iCycler thermal cycler (BioRad) with an initial denaturation at 95 °C for 15 min, followed by 35 cycles at 98 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s, and a final extension at 72 °C for 5 min. Products were then re-amplified with only the forward primers to generate ssDNA for allele-specific hybridization.

| Probe name | Mutation targeted | Sequence (5'-3') |
| --- | --- | --- |
| -28,-29_WT | -28/-29 WT | CCTgACTTTTATgCCCAg |
| -28_MT | -28 MT | CCTgACTTCTATgCCCAg |
| -29_MT | -29 MT | CCTgACTTTCATgCCCAg |
| IVSI5,1_WT | IVSI5/1 WT | CTTgATACCAACCTgCCC |
| IVSI5_MT | IVSI5 MT | CTTgATAgCAACCTgCCC |
| IVSI1_MT | IVSI1 MT | CTTgATACCAAACTgCCC |
| Cd26_WT | Cd26 WT | gggCCTCACCACCAAC |
| Cd26_MT | Cd26 MT | gggCCTTACCACCAAC |
| IVSII654_WT | IVSII654 WT | TTgCTATTgCCTTAACCC |
| IVSII654_MT | IVSII654 MT | TTgCTATTACCTTAACCC |

### Table 1.

Probe sequences for targeting each of the beta-globin gene mutations selected for this study. WT: wild-type, MT: mutant. (Adapted from Ng et al, 2010, copyright Elsevier Inc).

### 3.3. Hybridization and signal detection

Re-amplified PCR products were purified using the Microcon YM-30 filter device (Millipore) before being diluted to a 10 µL hybridization solution containing 500 mM NaCl and 30% formamide. Hybridization was carried out by pipetting the solution over the spotted beads. After 30 min incubation, the device was rinsed briefly with a solution containing only 500 mM NaCl and 30% formamide, and signal capture was carried out by fluorescence imaging. The imaging system comprised an epifluorescence microscope (BX51, Olympus), a 100 W mercury lamp and fluorescence filter set 41007 (Chroma Technology). MetaMorph 5.0 (Molecular Devices) was used to control acquisition of 12-bit monochrome bead images at 2 s exposure from a SPOT-RT Slider cooled-CCD camera (Diagnostic Instruments), and bead signals were quantitated using a modified version of software developed in-house previously (Ng and Liu, 2005).

### 3.4. Results and discussion

To demonstrate detection of the six beta-globin gene mutations, six human samples heterozygous for -28 A→G, -29 A→G, IVSI5 G→C, IVSI1 G→T, Cd26 GAG→AAG, and IVSII654 C→T, and one homozygous for IVSII654 C→T, were analyzed using the bead-based biosensor. All samples were genotyped previously by direct sequencing or multiplexed minisequencing (Wang et al., 2003). Wildtype and mutant probes targeting each mutation were conjugated to distinct bead sets, spotted onto a particular gel pad on the device, and distinguished based on their spatial addresses (Fig. 8A).
Probes were designed with the targeted mutation as near as possible to the centre of the probe, in order to increase the discrimination between matched and mismatched duplexes. Due to the proximity between the -28 and -29 mutations, as well as between the IVSI1 and IVSI5 mutations, each pair of mutations must be detected simultaneously on a single gel pad by four sets of probes to cover all possible genotypes. However, due to the lack of samples compound heterozygous for -28/-29 and IVSI1/IVSI5, only three sets of probes were required in this study for each pair of mutations. Fig. 8B shows the signal intensity from the wildtype and mutant probes used to target each mutation. All seven samples were correctly genotyped using the device. For heterozygous mutations, signal intensities from the wildtype probes did not differ significantly from those of the mutant probes, attaining Student's t-test p-values > 0.05 for all except IVSII654, which had a slightly lower p-value of about 0.01. In the absence of a mutation, the wildtype probe intensities were significantly higher than those of the mutant probes, with p-values far lower than 0.001. For the homozygous IVSII654 mutation, the mutant probe intensity was significantly higher than that of the wildtype probe, attaining a p-value < 0.0001. This similarity or significant difference between wildtype and mutant probe intensities allowed correct identification of the heterozygous mutant and homozygous wildtype (or mutant) samples, respectively.

### Figure 8.

Allele-specific hybridization on the device. (A) Typical example of the beads spotted onto a gel pad. Probe-beads targeting the Cd26 wildtype variant were spotted onto a gel pad, followed by those targeting the mutant variant (red arrows). The difference in probe intensities showed the sample to be of homozygous Cd26 normal genotype. (B) Signal intensity from the wild-type (■) and mutant (■) probe-beads targeting each of the six mutations selected for this study. (Adapted from Ng et al, 2010, copyright Elsevier Inc).

The spatially addressable bead-based biosensor offers an alternative tool for simple yet efficient and rapid detection of beta-thalassemia mutations. The device comprises simply a glass slide fabricated with a thin polyacrylamide matrix on its surface using a photopolymerization process that is faster (~ 45 min) and far less complicated than conventional photolithographic techniques for making silicon chips. The main advantage of the device is its ability to distinguish different bead types without the need for prior time-consuming and laborious techniques such as color-encoding (Braeckmans et al., 2001). This is due to the natural immobilization of the beads to the polyacrylamide gel pads, which allows the beads to acquire unique spatial addresses. Detection is achieved by applying the solution of PCR-amplified targets over the region of the spotted beads for passive hybridization to occur, which obviates the need for microfluidic mixing and thus microchannels. This further simplifies the fabrication process, lowers the cost of the device, and reduces the sample volume required (< 10 µL). Despite the lack of microfluidic mixing, detection is achieved in 30 min, although this might possibly be even faster, given that we have achieved hybridization on this device within 10 min, albeit with synthetic targets (Ng et al., 2008).
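To make the read-out logic concrete, the following is a hypothetical sketch (not the authors' analysis software) of how a genotype call could be derived from the mean wildtype and mutant probe-bead intensities; the 3x ratio threshold is an arbitrary illustration, whereas the study itself compared bead populations using Student's t-tests:

#include <stdio.h>

typedef enum { HOMO_WT, HETERO, HOMO_MUT } genotype_t;

/* Call a genotype from mean wildtype (wt) and mutant (mt) probe-bead
 * intensities. The 3x ratio threshold is purely illustrative. */
static genotype_t call_genotype(double wt, double mt)
{
    if (wt > 3.0 * mt) return HOMO_WT;   /* only the WT probe lights up     */
    if (mt > 3.0 * wt) return HOMO_MUT;  /* only the mutant probe lights up */
    return HETERO;                       /* comparable WT and MT signals    */
}

int main(void)
{
    static const char *name[] = {"homozygous wildtype", "heterozygous",
                                 "homozygous mutant"};
    /* Made-up intensity values for illustration only. */
    printf("Cd26 example: %s\n", name[call_genotype(1800.0, 310.0)]);
    printf("IVSII654 example: %s\n", name[call_genotype(290.0, 1750.0)]);
    return 0;
}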
## 4. Conclusion

The advent of biosensors has allowed biomedical research and clinical diagnostics to leverage the advantages of miniaturization, such as reduced sample volumes, faster reaction times, and the possibility of multiplexed detection. The last point is of particular importance, since the simultaneous detection of multiple targets has resulted in significant time savings, particularly for applications requiring high throughput. Often, multiple targets must be detected in order to draw any meaningful conclusion in clinical diagnosis. So much progress has been made in this field that it is now possible to utilize high-throughput platforms such as microarrays to interrogate thousands of targets at once. The crucial role played by these technologies, such as multiplex PCR and the various forms of 2D, 3D and bead-based microarrays, over the past decades is indisputable, and will continue to be so. However, several challenges exist.

First, it is important to reduce the cost of some of these technologies so as to make them more affordable, particularly for clinical diagnostics. For example, systems for real-time PCR can be quite costly, due in part to the high-precision optical detection modules found within. With advances in optics, both light sources (e.g. LEDs) and detectors (e.g. digital cameras) are getting more affordable, which would help to bring down the costs of such systems. Part of the costs are also attributable to licensing issues: manufacturers of real-time PCR systems and reagents have to pay a license fee, including royalties, to the original patent owners. Some of the patent protections will expire soon, so prices should also come down, as in the case of the patent expiry of the Taq polymerase in 2006. The manufacturing costs for microarrays and their bead-based counterparts are also high. Hopefully, with advances in manufacturing technologies, the cost can eventually be reduced.

Second, it is important for these technologies to be of sufficient sensitivity and specificity in order to meet the standards required in clinical diagnostics. Real-time PCR has no problems with that, since it is not uncommon for it to achieve a sensitivity and specificity close to 100%. 2-D microarrays, on the other hand, might face more of a challenge. The diffusion-limited kinetics, steric hindrances and high noise contributed by the planar surface might somewhat affect sensitivity and specificity. It is important to ascertain that the microarrays can reproducibly meet the required levels of sensitivity and specificity before their application to clinical diagnostics.

Third, the reaction times for some applications can still be rather long, particularly for the microarrays. It is desirable to reduce these times further, since clinical diagnostics often require a fast turnaround time to minimize patient anxiety and to aid decision making in disease management.

Finally, with the advent of modern technologies, some of the multiplexing technologies discussed here might find themselves being slowly displaced. Sequencing is a method used to decipher the order of bases along a DNA strand. Traditionally slow, sequencing can now be performed in a massively parallel fashion on high-throughput platforms to greatly increase its rate. Known as next generation sequencing, this approach can generate thousands of sequences at once, using commercial sequencers from companies such as Illumina (Solexa), Roche (454) and Applied Biosystems.
Some of these platforms, like the SOLiD system from Applied Biosystems, can generate up to 60 gigabases of DNA sequence per run. With these advances in next generation sequencing comes the race for rapid and low-cost full genome sequencing. The Archon X Prize for Genomics was established in October 2006 to award US$10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 per genome". As of January 2011, the prize is as yet unclaimed. However, the possibility of being able to sequence an entire human genome accurately, cheaply and rapidly in future might supplant some of today's multiplexing technologies like the DNA microarray. In summary, multiplexing capabilities in biosensors have come a long way and will continue to advance rapidly in the next decade, with a large number of companies pouring large sums of money into research and development. The ideal platform will be one offering high-throughput, rapid and low-cost diagnostics. Whether that can be realised in the near future remains to be seen.

## References

1. G. Ramsay (1998). DNA chips: state-of-the-art. Nat Biotech 16, 40.
2. M. F. Ali, R. Kirby, A. P. Goodey, M. D. Rodriguez, A. D. Ellington, D. P. Neikirk, J. T. McDevitt (2003). DNA hybridization and discrimination of single-nucleotide mismatches using chip-based microbead arrays. Anal Chem 75, 4732-4739.
3. K. Braeckmans (2001). A new generation of encoded microcarriers. Drug Discovery Technology, 12-17 Aug, Boston.
4. J. Chen, M. A. Iannone, M. S. Li, J. D. Taylor, P. Rivers, A. J. Nelsen, K. A. Slentz-Kesler, A. Roses, M. P. Weiner (2000). A microsphere-based assay for multiplexed single nucleotide polymorphism analysis using single base chain extension. Genome Res 10, 549-557.
5. J. Cooper, N. Yazvenko, K. Peyvan, K. Maurer, C. R. Taitt, W. Lyon, D. L. Danley (2010). Targeted deposition of antibodies on a multiplex CMOS microarray and optimization of a sensitive immunoassay using electrochemical detection. PLoS One 5, e9781.
6. C. Daelemans, M. E. Smits, G. Ritchie, S. Abu-Amero, I. M. Sudbery, M. S. Campino, S. Forrest, T. G. Clark, P. Stanier, D. Kwiatkowski, P. Deloukas, E. T. Dermitzakis, S. Tavaré, G. E. Moore, I. Dunham (2010). High-throughput analysis of candidate imprinted genes and allele-specific gene expression in the human term placenta. BMC Genet 11, 25.
7. A. Drobyshev, N. Mologina, V. Shik, D. Pobedimskaya, G. Yershov, A. Mirzabekov (1997). Sequence analysis by hybridization with oligonucleotide microchip: identification of beta-thalassemia mutations. Gene 188, 45-52.
8. D. C. Duffy, J. C. McDonald, O. J. A. Schueller, G. M. Whitesides (1998). Rapid prototyping of microfluidic systems in poly(dimethylsiloxane). Anal Chem 70, 4974-4984.
9. S. A. Dunbar (2006). Applications of Luminex xMAP technology for rapid, high-throughput multiplexed nucleic acid detection. Clinica Chimica Acta 363, 71.
10. S. Gunn, I. T. Yeh, I. Lytvak, B. Tirtorahardjo, N. Dzidic, S. Zadeh, J. Kim, C. McCaskill, L. Lim, M. Gorre, M. Mohammed (2010). Clinical array-based karyotyping of breast cancer with equivocal HER2 status resolves gene copy number and reveals chromosome 17 complexity. BMC Cancer 10, 396.
11. M. Han, X. Gao, J. Z. Su, S. Nie (2001). Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules. Nat Biotechnol 19, 631-635.
12. B. J. Hong, S. J. Oh, T. O. Youn, S. H. Kwon, J. W. Park (2005). Nanoscale-controlled spacing provides DNA microarrays with the SNP discrimination efficiency in solution phase. Langmuir 21, 4257-4261.
13. B. J. Hong, V. Sunkara, J. W. Park (2005). DNA microarrays on nanoscale-controlled surface. Nucl Acids Res 33, e106.
14. M. A. Iannone, J. D. Taylor, J. Chen, M. S. Li, P. Rivers, K. A. Slentz-Kesler, M. P. Weiner (2000). Multiplexed single nucleotide polymorphism genotyping by oligonucleotide ligation and flow cytometry. Cytometry 39, 131-140.
15. A. Kolchinsky, A. Mirzabekov (2002). Analysis of SNPs and other genomic variations using gel-based chips. Hum Mutat 19, 343-360.
16. K. S. LaForge, V. Shick, R. Spangler, D. Proudnikov, V. Yuferov, Y. Lysov, A. Mirzabekov, M. J. Kreek (2000). Detection of single nucleotide polymorphisms of the human mu opioid receptor gene by hybridization or single nucleotide extension on custom oligonucleotide gelpad microchips: potential in studies of addiction. Am J Med Genet 96, 604-615.
17. A. X. Li, M. Seul, J. Cicciarelli, J. C. Yang, Y. Iwaki (2004). Multiplexed analysis of polymorphisms in the HLA gene complex using bead array chips. Tissue Antigens 63, 518-528.
18. Y. Li, D. Elashoff, M. Oh, U. Sinha, M. A. St. John, X. Zhou, E. Abemayor, D. T. Wong (2006). Serum circulating human mRNA profiling and its utility for oral cancer detection. J Clin Oncol 24, 1754-1760.
19. C. H. Lin, J. M. Yeakley, T. K. McDaniel, R. Shen (2009). Medium- to high-throughput SNP genotyping using VeraCode microbeads. Methods Mol Biol 496, 129-142.
20. K. Lindroos, U. Liljedahl, M. Raitio, A. C. Syvanen (2001). Minisequencing on oligonucleotide microarrays: comparison of immobilisation chemistries. Nucleic Acids Res 29, E69.
21. W. T. Liu, J. H. Wu, E. S. Y. Li, E. S. Selamat (2005). Emission characteristics of fluorescent labels with respect to temperature changes and subsequent effects on DNA microchip studies. Appl Environ Microbiol 71, 6453-6457.
22. A. J. Merritt, T. Keehner, L. C. O'Reilly, R. L. McInnes, T. J. Inglis (2010). Multiplex amplified nominal tandem-repeat analysis (MANTRA), a rapid method for genotyping Mycobacterium tuberculosis by use of multiplex PCR and a microfluidic laboratory chip. J Clin Microbiol 48, 3758-3761.
23. S. P. Mulvaney, H. M. Mattoussi, L. J. Whitman (2004). Incorporating fluorescent dyes and quantum dots into magnetic microbeads for immunoassays. Biotechniques 36, 602-606, 608-609.
24. J. K. Ng, W. T. Liu (2005). LabArray: real-time imaging and analytical tool for microarrays. Bioinformatics 21, 689-690.
25. J. K. Ng, E. S. Selamat, W. T. Liu (2008). A spatially addressable bead-based biosensor for simple and rapid DNA detection. Biosens Bioelectron 23, 803-810.
26. J. K. Ng, W. Wang, W. T. Liu, S. S. Chong (2010). Spatially addressable bead-based biosensor for rapid detection of beta-thalassemia mutations. Anal Chim Acta 658, 193-196.
27. D. Proudnikov, E. Timofeev, A. Mirzabekov (1998). Immobilization of DNA in polyacrylamide gel for the manufacture of DNA and DNA-oligonucleotide microchips. Anal Biochem 259, 34-41.
28. F. B. Rahmatpanah, S. Carstens, S. I. Hooshmand, et al. (2009). Large-scale analysis of DNA methylation in chronic lymphocytic leukemia. Epigenomics 1, 39-61.
29. B. Ramanamurthy, K. K. J. Ng, E. S. Shah, N. Balasubramaniam, W. T. Liu (2008). Silicon nanopillars substrate for enhancing signal intensity in DNA microarrays. Biosensors and Bioelectronics 24, 723.
30. S. Roessler, H. L. Jia, A. Budhu, M. Forgues, Q. H. Ye, J. S. Lee, S. S. Thorgeirsson, Z. Sun, Z. Y. Tang, L. X. Qin, X. W. Wang (2010). A unique metastasis gene signature enables prediction of tumor relapse in early-stage hepatocellular carcinoma patients. Cancer Res 70, 10202-10212.
31. M. Schena, D. Shalon, R. W. Davis, P. O. Brown (1995). Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science 270, 467-470.
32. D. I. Stimpson, S. M. Knepper, M. Shida, K. Obata, H. Tajima (2004). Three-dimensional microarray platform applied to single nucleotide polymorphism analysis. Biotechnol Bioeng 87, 99-103.
33. J. D. Taylor, D. Briley, Q. Nguyen, K. Long, M. A. Iannone, M. S. Li, F. Ye, A. Afshari, E. Lai, M. Wagner, J. Chen, M. P. Weiner (2001). Flow cytometric platform for high-throughput single nucleotide polymorphism analysis. Biotechniques 30, 661-666, 668-669.
34. N. Thaitrong, N. M. Toriello, N. Del Bueno, R. A. Mathies (2009). Polymerase chain reaction-capillary electrophoresis genetic analysis microdevice with in-line affinity capture sample injection. Anal Chem 81, 1371-1377.
35. Y. Tojo, J. Asahina, Y. Miyashita, M. Takahashi, N. Matsumoto, S. Hasegawa, M. Yohda, H. Tajima (2005). Development of an automation system for single nucleotide polymorphisms genotyping using bio-strand, a new three-dimensional microarray. J Biosci Bioeng 99, 120-124.
36. D. G. Wang, J. B. Fan, C. J. Siao, A. Berno, P. Young, et al. (1998). Large-scale identification, mapping, and genotyping of single-nucleotide polymorphisms in the human genome. Science 280, 1077-1082.
37. W. Wang, S. K. Kham, G. H. Yeo, T. C. Quah, S. S. Chong (2003). Multiplex minisequencing screen for common Southeast Asian and Indian beta-thalassemia mutations. Clin Chem 49, 209-218.
38. H. Xu, M. Y. Sha, E. Y. Wong, J. Uphoff, Y. Xu, J. A. Treadway, A. Truong, E. O'Brien, S. Asquith, M. Stubbins, N. K. Spurr, E. H. Lai, W. Mahoney (2003). Multiplexed SNP genotyping using the Qbead system: a quantum dot-encoded microsphere-based assay. Nucleic Acids Res 31, e43.
2018-03-22 21:07:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46333780884742737, "perplexity": 7780.193910271388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648003.58/warc/CC-MAIN-20180322205902-20180322225902-00274.warc.gz"}
http://aco.math.cmu.edu/abs-18-19/jan31.html
The ACO Seminar (2018–2019)

January 31, 3:30pm, Wean 8220

Michael Anastos, Carnegie Mellon University

Coloring (random) hypergraphs

Abstract: The talk will consist of two parts, both concerning colorings of (random) hypergraphs. 1. Let $W_q$ denote the set of proper $q$-colorings of the random graph $G_{n,m}$, $m = dn/2$, and let $H_q$ be the graph with vertex set $W_q$ where two vertices are connected iff the corresponding proper colorings differ in a single vertex. We show that for sufficiently large $d$, if $q>(1+o(1))d/\log d$ then $H_q$ is connected, providing an asymptotically matching upper bound to the lower bound given by Achlioptas and Coja-Oghlan. We then extend our result to random hypergraphs. 2. We study an MCMC algorithm for sampling a (near) uniform $q$-coloring of a simple $k$-uniform hypergraph with $n$ vertices and maximum degree $D$. Here $q>\max(C_1(k) \log n, C_2(k)D^{1/(k-1)})$. This is joint work with Alan M. Frieze.

Before the talk, at 3:10pm, there will be tea and cookies in Wean 6220.
2019-05-23 03:10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238730669021606, "perplexity": 653.3597353865559}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257002.33/warc/CC-MAIN-20190523023545-20190523045545-00544.warc.gz"}
http://www.statemaster.com/encyclopedia/Apsis
Apsis

[Figure: A diagram of Keplerian orbital elements.]

In astronomy, an apsis, plural apsides (IPA: /apsɪdɪːz/), is the point of greatest or least distance of the elliptical orbit of an astronomical object from its center of attraction, which is generally the center of mass of the system.

The point of closest approach is called the periapsis or pericentre, and the point of farthest excursion is the apoapsis (Greek από, from, which becomes απ before a vowel, and αφ before rough breathing), apocentre or apapsis (the latter term, although etymologically more correct, is much less used). A straight line drawn through the periapsis and apoapsis is the line of apsides. This is the major axis of the ellipse, the line through the longest part of the ellipse.

Related terms are used to identify the body being orbited. The most common are perigee and apogee, referring to orbits around the Earth, and perihelion and aphelion, referring to orbits around the Sun (Greek ήλιος hēlios, sun).

The following formulae relate the periapsis and apoapsis quantities:

- Periapsis: maximum speed $v_\mathrm{per} = \sqrt{\frac{(1+e)\mu}{(1-e)a}}$ at minimum (periapsis) distance $r_\mathrm{per}=(1-e)a$
- Apoapsis: minimum speed $v_\mathrm{ap} = \sqrt{\frac{(1-e)\mu}{(1+e)a}}$ at maximum (apoapsis) distance $r_\mathrm{ap}=(1+e)a$

where one easily verifies $h = \sqrt{(1-e^2)\mu a}$ and $\epsilon=-\frac{\mu}{2a}$ (each the same for both points, as they are for the whole orbit, in accordance with Kepler's laws of planetary motion (conservation of angular momentum) and the conservation of energy), and where:

- $a$ is the semi-major axis,
- $e$ is the orbital eccentricity,
- $\mu$ is the standard gravitational parameter of the central body,
- $h$ is the specific relative angular momentum, and
- $\epsilon$ is the specific orbital energy.

The eccentricity can be recovered from the two distances:

$e=\frac{r_\mathrm{ap}-r_\mathrm{per}}{r_\mathrm{ap}+r_\mathrm{per}}=1-\frac{2}{\frac{r_\mathrm{ap}}{r_\mathrm{per}}+1}$

Note that for conversion from heights above the surface to distances, the radius of the central body has to be added, and conversely. The arithmetic mean of the two distances is the semi-major axis $a$. The geometric mean of the two distances is the semi-minor axis $b$. The geometric mean of the two speeds is $\sqrt{-2\epsilon}$, the speed corresponding to a kinetic energy which, at any position of the orbit, added to the existing kinetic energy, would allow the orbiting body to escape (the square root of the sum of the squares of the two speeds is the local escape velocity).
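The relations above are easy to check numerically; the following is a small illustrative sketch (not part of the original article) that computes the apsis distances and speeds from $a$, $e$ and $\mu$, using Earth's gravitational parameter and made-up orbital elements as example values:

#include <math.h>
#include <stdio.h>

/* Periapsis/apoapsis distance and speed for an elliptic orbit, given
 * semi-major axis a (m), eccentricity e (0 <= e < 1) and the standard
 * gravitational parameter mu (m^3/s^2). Example values only. */
int main(void)
{
    const double mu = 3.986004418e14;   /* Earth's mu, m^3 s^-2 */
    const double a  = 6.6e6;            /* example semi-major axis, m */
    const double e  = 0.01;             /* example eccentricity */

    double r_per = (1.0 - e) * a;
    double r_ap  = (1.0 + e) * a;
    double v_per = sqrt((1.0 + e) * mu / ((1.0 - e) * a));
    double v_ap  = sqrt((1.0 - e) * mu / ((1.0 + e) * a));

    printf("r_per = %.0f m, v_per = %.1f m/s\n", r_per, v_per);
    printf("r_ap  = %.0f m, v_ap  = %.1f m/s\n", r_ap, v_ap);

    /* Consistency checks: h and the specific orbital energy should be
     * identical at both apsides, as stated above. */
    printf("h:   %.3e vs %.3e\n", r_per * v_per, r_ap * v_ap);
    printf("eps: %.3e vs %.3e\n",
           0.5 * v_per * v_per - mu / r_per,
           0.5 * v_ap * v_ap - mu / r_ap);
    return 0;
}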
## Terminology

The words "pericentre" and "apocentre" are occasionally seen, although periapsis/apoapsis are preferred in technical usage. Various related terms are used for other celestial objects. The '-gee', '-helion', '-astron' and '-galacticon' forms are frequently used in the astronomical literature, while the other listed forms are occasionally used, although '-saturnium' has very rarely been used in the last 50 years. The '-gee' form is commonly (although incorrectly) used as a generic 'closest approach to planet' term instead of specifically applying to the Earth. The term peri/apomelasma (from the Greek root) was used by Geoffrey A. Landis in 1998 before peri/aponigricon (from the Latin) appeared in the scientific literature in 2002.

| Body | Closest approach | Farthest approach |
| --- | --- | --- |
| Galaxy | Perigalacticon | Apogalacticon |
| Star | Periastron | Apastron |
| Black hole | Perimelasma/Perinigricon | Apomelasma/Aponigricon |
| Sun | Perihelion | Aphelion[1] |
| Mercury | Perihermion | Apohermion |
| Venus | Pericytherion/Pericytherean/Perikrition | Apocytherion/Apocytherean/Apokrition |
| Earth | Perigee | Apogee |
| Moon | Periselene/Pericynthion/Perilune | Aposelene/Apocynthion/Apolune |
| Mars | Periareion | Apoareion |
| Jupiter | Perizene/Perijove | Apozene/Apojove |
| Saturn | Perikrone/Perisaturnium | Apokrone/Aposaturnium |
| Uranus | Periuranion | Apouranion |
| Neptune | Periposeidion | Apoposeidion |

- In the Moon's case, in practice all three forms are used, albeit very infrequently. The '-cynthion' form is, according to some, reserved for artificial bodies, whilst others reserve '-lune' for an object launched from the Moon and '-cynthion' for an object launched from elsewhere. The '-cynthion' form was the version used in the Apollo Project, following a NASA decision in 1964.
- For Venus, the form '-cytherion' is derived from the commonly used adjective 'cytherean'; the alternate form '-krition' (from Kritias, an older name for Aphrodite) has also been suggested.
- For Jupiter, the '-jove' form is occasionally used by astronomers whilst the '-zene' form is never used, like the other pure Greek forms ('-areion' (Mars), '-hermion' (Mercury), '-krone' (Saturn), '-uranion' (Uranus), '-poseidion' (Neptune) and '-hadion' (Pluto)).

## Earth's perihelion and aphelion

The Earth is closest to the Sun in early January and furthest in early July. The relation between perihelion, aphelion and the Earth's seasons changes over a 25,765-year cycle. This precession of the equinoxes contributes to periodic climate change. The day and hour of these events for the next few years are:[1]

| Year | Perihelion | Aphelion |
| --- | --- | --- |
| 2007 | Jan 3, 20Z | July 7, 00Z |
| 2008 | Jan 3, 00Z | July 4, 08Z |
| 2009 | Jan 4, 15Z | July 4, 02Z |
| 2010 | Jan 3, 00Z | July 6, 11Z |
| 2011 | Jan 3, 19Z | July 4, 15Z |
| 2012 | Jan 5, 00Z | July 5, 03Z |
| 2013 | Jan 2, 05Z | July 5, 15Z |
| 2014 | Jan 4, 12Z | July 4, 00Z |
| 2015 | Jan 4, 07Z | July 6, 19Z |
| 2016 | Jan 2, 23Z | July 4, 16Z |

## Notes and references

1. ^ Properly pronounced 'affelion' because the Greek is αφήλιον, although the hypercorrection 'ap-helion' is commonly heard.
2. ^ Apsis. Glossary of Terms. National Solar Observatory (February 21, 2005). Retrieved on 2006-09-30.
2019-07-16 08:01:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.658679723739624, "perplexity": 2349.5090557095623}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524517.31/warc/CC-MAIN-20190716075153-20190716101153-00112.warc.gz"}
http://mathoverflow.net/questions/69292/is-endoscopy-interesting-in-simply-laced-cases
# Is endoscopy interesting in simply-laced cases? Let $G$ be a complex algebraic group, and write $Z(g)$ for the centralizer of a semisimple element $g$ in $G$. I will assume $G$ is simply connected, in which case $Z(g)$ is connected. Let $G^\vee$ and $Z(g)^\vee$ be the Langlands dual complex algebraic groups. If I understand, $Z(g)^\vee$ is called an "endoscopic group" for $G^\vee$, though I've taken all of the arithmetic out of it. Most of the time, $Z(g)$ is a Levi subgroup of $G$, in which case $Z(g)^\vee$ is naturally a Levi subgroup of $G^\vee$. The handful of centralizers that are not Levis were classified by Kac. It's often emphasized that in these more interesting cases the associated endoscopic group is not a subgroup of $G^\vee$. For instance, $Z(g) = SL(2) \times SL(2)$ is a centralizer of an element of order 2 in $G = Sp(4)$, but there is no way to include $Z(g)^\vee = PGL(2) \times PGL(2)$ into $G^\vee = SO(5)$. I just noticed that the endoscopic groups of $G^\vee$ do include into $G^\vee$ whenever $G$ is simply laced. This works out in type A because all centralizers are Levis, in type D essentially because of the self-duality of the groups $SO(2n)$, and in types $E_6,E_7,E_8$ by checking the centralizers one by one. 1. There are 2 interesting centralizers in $E_6$, 4 in $E_7$, and 8 in $E_8$, so plenty of opportunities for me to have made a mistake. Is my claim correct? 2. If correct, is there a simpler explanation for it than a case-by-case check? 3. Is the existence or nonexistence of an endoscopic sub group relevant in the kinds of endoscopic things people study in the Langlands program? - "The handful of centralizers that are not Levis were classified by Kac." -- I would've thought to credit rather [Borel-de Siebenthal]. mathoverflow.net/questions/28878/… – Allen Knutson Jul 1 '11 at 22:15
2015-11-27 17:18:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8466783761978149, "perplexity": 282.441539786938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449793.41/warc/CC-MAIN-20151124205409-00298-ip-10-71-132-137.ec2.internal.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=ind0403&L=LATEX-L&F=&S=&P=10264
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE #### View: Message: [ First | Previous | Next | Last ] By Topic: [ First | Previous | Next | Last ] By Author: [ First | Previous | Next | Last ] Font: Proportional Font Subject: Re: DocTeX -- the next generation? From: Date: Tue, 23 Mar 2004 15:44:03 +0100 Content-Type: text/plain Parts/Attachments: text/plain (143 lines) At 17.59 +0100 2004-03-21, David Kastrup wrote: >Hello, > >I have been musing about the best ways of providing people with >integrated help systems, input helps and so forth and so on. > >The usual TeX authoring system (like Emacs with AUCTeX, or kile, or >WinEdt, or even LyX) will provide menu entries for commands, >descriptions like the following: [snip] >and so forth and so on (not to mention syntax highlighting). Now the >usual way DocTeX files describe things are with \DescribeMacro{...}, ("DocTeX" --- that had me confused for quite a while. "doc.sty", "the doc package", or "the .dtx system" I would have understood, but "DocTeX" was definitely a new name for this.) >with examples of code, with the synposis of commands (using things >like \marg, \oarg and so in the descriptions). Interestingly enough, neither \marg nor \oarg are defined by doc.sty, but by ltxdoc.cls. >I would propose that the next iteration of DocTeX should try to >formalize most of the stuff into somewhat more rigid patterns. Richer markup, you mean? It could probably be done, and it would probably be a Good Thing. >It would appear that the material before \StopEventually{} would >usually, if just given a bit more formal markup, Hmm... "just a *bit* more formal markup". That might have to be quite a large bit, I'm afraid. The current amount of markup (\DescribeMacro and friend) in the description parts of .dtx files is little more than special commands for putting stuff into the index. OTOH small additions could give large gains. A more formalised environment for code examples could help a lot. >be quite sufficient >to let the following be generated: > >Pages fit for TeXinfo or similar systems (like the above example) >that can be accessed once the editing system knows what packages one >uses, by referring to the name of the defined commands. Exporting text from .dtx files may be more work than you think. Many "general purpose" LaTeX->Something converters choke on or get thoroughly confused by .dtx files because they don't support \catcode changes, whereas pretty much all markup in .dtx files relies on some particular \catcode change. >Examples typeset with something like >\begin{examplepreamble} >\documentclass[fleqn]{article} >\usepackage{amsmath} >\end{examplepreamble} >With the fleqn class option to article, the look will be as follows: >\begin{examplebody} >$$> E = mc^2 >$$ >\end{examplebody} >In the typeset version, the source text of the example body and the >result would be side by side (hello Frank, nice you managed doing >this sort of thing in TLC2), in the help text version, an >appropriately generated image will be placed. Any idea whatsoever on how that could be achieved in the typeset version? In general, I frankly don't think it is! (And Frank probably relied on duplicating code and a certain amount of hacking in the production of TLC2.) The best you can hope to automate is to have these examples written to separate files that can then be typeset separately. 
To that end, you should probably make use of docstrip and rather write something like % \begin{exampleshow} %<*example> \documentclass[fleqn]{article} \usepackage{amsmath} % % \end{exampleshow} % \begin{examplehide} %<*example> \begin{document} % \end{examplehide} % With the fleqn class option to article, the look will be as follows: % \begin{exampleshow} $$E = mc^2$$ % \end{exampleshow} % \begin{examplehide} \end{document} % % \end{examplehide} >If the next DocTeX format is enhanced like this, we will gain > >a) automatically generated help systems including examples and >graphics in HTML, TeXinfo and other editing-system friendly ways. >b) instructions sufficient for helping with the entry of commands and >arguments. >c) graphical examples and cut&paste code guaranteed to run. >d) producing TLC3 will just entail listing all the names of the .dtx >files to some program, and it will be able to gather all the rest >automatically. >e) a hyperlink into the complete program source documentation for more >info. > >Of course, for help texts and stuff like that, one should try to come >up with a nice scheme that would help adding translations into >different languages, too. > >Pipe dream? (d) definitely is. Some of the other stuff probably could be done, but it certainly isn't easy. Targeting PDF should be easier than .info or HTML, because (i) then one wouldn't have to invent a separate LaTeX parser and (ii) there is no need to worry about whether the text will survive the typographical downgrading it would mean to convert it to .info or HTML. >Maybe. But I think there is a case to be made to formalize quite a >bit more in the usage instructions part of DocTeX files, to a degree >where mechanical exploitation becomes feasible. You're probably welcome to develop something. Keep in mind though that you need some sort of idea how the processing you want to make should be accomplished before inventing markup for it. If you do work on this, you'll probably want to take a look at CTAN:macros/latex/contrib/xdoc/, which reimplements a lot of the inner workings of doc so that they are not so tied to particular user commands. Most of it is aimed at enriching the implementation part (the .dtx files I write are mostly implementation), but it should be useful also for writing new description part commands. In particular there is a small system for handling "strings" of arbitrary characters (not just those with catcode ordinary or letter); this could be useful when making anchor names for hyperlinking and such things. Lars Hellström
2019-09-21 16:19:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8282694220542908, "perplexity": 4203.063632587994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00204.warc.gz"}
http://people.bath.ac.uk/rm257/pde_seminar/autumn2011.html
## Autumn 2011

- 7 Oct — Sophia Demoulini (University of Cambridge): "Weak and yet weaker solutions of time-dependent nonlinear elasticity and viscoelasticity". For three dimensional elasticity we discuss a global existence theorem of measure-valued solutions with polyconvex stored energy function, and weak-strong uniqueness (recovery of strong solutions). This in particular makes use of the polyconvex structure, namely the null Lagrangians. We will also see the use of relative entropy as a tool in the analysis of weak and measure-valued solutions. For viscoelasticity we discuss conditions for both global measure-valued and classical weak solutions, and conditions under which the recovery of the classical weak solution is guaranteed from a measure-valued solution.
- 14 Oct — Landscape seminar
- 21 Oct — Florian Theil (University of Warwick): "Periodic ground state in two and three dimensions". We consider the asymptotic behavior of minimizers of pair interaction energies $$E(y) = \sum_{1 \leq i < j \leq N} V(|y_i - y_j|)$$ as $N$ tends to infinity, where $y_i \in R^d$, $d =2$ or $d=3$. Intuitively one expects that for 'reasonable' potentials the limit will be a highly symmetric structure: a lattice. We construct open sets of potentials in $C^2$ for which it can be shown rigorously that the minimizers converge to a triangular lattice in two dimensions and a face-centered cubic lattice in three dimensions. The proof relies on several new results in discrete geometry. Those results are established with the support of a computer program.
- 28 Oct — Landscape seminar
- 4 Nov — CNRS – Paris 6: "Travelling fronts for time heterogeneous Fisher-KPP equations"
- 11 Nov — Landscape seminar
- 18 Nov — Aram Karakhanyan (University of Edinburgh): "Regularity of free boundary for a stationary heat transfer problem". In this talk I will discuss the Stefan problem with given constant convection. The objective is to study the optimal regularity of the solution and the free boundary for both the one phase and two phase problem.
- 25 Nov — Landscape seminar
- 2 Dec — Ali Taheri (University of Sussex): "Energy minimisers on elastic annuli, generalised twists, SO(n) and the Spinor groups"
- 9 Dec — Landscape seminar
- 16 Dec — Simon Blatt (University of Warwick): "Analysis of O'Hara's knot energies"
- 12 Jan — Hannes Uecker (University of Oldenburg): "Approximating the dynamics of active cells in a diffusive medium by ODEs – Homogenization with Localization" (joint work with J. Müller, TU München). Bacteria may change their behavior depending on the population density. Here we study a dynamical model in which cells of radius $R$ within a diffusive medium communicate with each other via diffusion of a signalling substance produced by the cells. The model consists of an initial boundary value problem for a parabolic PDE describing the exterior concentration $u$ of the signalling substance, coupled with $N$ ODEs for the masses $a_i$ of the substance within each cell. We show that for small $R$ the model can be approximated by a hierarchy of models, namely first a system of $N$ coupled delay ODEs, and in a second step by $N$ coupled ODEs. We give some illustrations of the dynamics of the approximate model. Note: Despite this being a Thursday, this seminar takes place at 4.15 in 4W 1.7.
2013-05-24 15:28:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6217490434646606, "perplexity": 1135.208534433451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00049-ip-10-60-113-184.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/550511/what-is-the-best-package-for-accessibility-tagging
# What is the best package for accessibility tagging? The PDF files I create from LaTeX fail any accessibility tests, as the final document is not tagged in any way. I know I can add tags directly to a PDF document with Adobe Acrobat; also with a tagging/accessibility package for LaTeX. But I'm wondering which package reflects best the state of the art, and is the easiest to use? I don't want to have to rewrite too much of several hundred pages of documents. There seem to be several competing packages available, but I don't want to have to put a lot of work into using one to find that it becomes inadequate after a while. • ConTeXt. Although, there are some efforts for LaTeX as well, see tagpdf. Jun 22, 2020 at 5:48 • Yes, thanks - I've used ConTeXt, but in the end I had to go back to LaTeX as there were some things in LaTeX not (at that stage) in ConTeXt. Also - I don't want to have to rewrite all my files into a new format! Jun 22, 2020 at 6:41 • Even if there is a solution that doesn't require moving to ConTeXt, you'll still have to rewrite large parts of your file, because accessibility has quite some special requirements on how the content is input. It requires lots of manual annotation. Jun 23, 2020 at 3:06 • Accessibility is not only tagging. It is also about correct metadata. For this requirement I think hyperxmp is the most recent and sophisticated. In terms of PDF/A, conformance «a» (but not PDF/UA) also colour management is a requirement. Right now, the necessary OutputIntent is best included manually, see an example. Since tagpdfis rather labour intensive you may consider the Acrobat approach. Jun 23, 2020 at 9:15 • Thank you all: I'm still a bit confused - should I move to ConTeXt (I can translate LaTeX to ConTeXt using pandoc), or LuaLaTeX, or both? I've just taken a leading role in my University's accessibility management, and clearly I would like my own documents to be models of good practice. I can also use HTML and MathJax, but my preference is a single PDF document. Jun 25, 2020 at 0:31 The only publicitely available LaTeX-package that currently actually works is as far as I know tagpdf [Disclaimer: I'm the author]: It's documentation isn't perfectly tagged but good enough to pass the PAC3 test. But tagpdf hasn't been written as a user package but to allow experiments and tests and to help to identify missing interfaces in the kernel and in packages. It can change at any time in incompatible ways and it requires some skills to use it. The package is part of a project to add accessibility support to LaTeX. I wrote an article about the general plan in Arstechnica (https://www.guitex.org/home/images/ArsTeXnica/AT028/fischer.pdf). The article has also been reprinted in the newest tugboat (https://www.tug.org/TUGboat/tb41-1/tb127fischer-accessible.pdf). Side-remark: You tagged your question with pdftex. With pdftex tagging of running text is much more difficult than with luatex. • Thank you - I've looked at tagpdf, but it is said to be "not ready for production". And you make that point yourself above. It looks like I might need to learn to use LuaTeX, about which I currently know nothing whatsoever. Jun 25, 2020 at 0:27 You may see some older discussion of the accessibility package. accessibility was developed and published back in 2007 as a proof of concept for some of the KOMA document styles. I got hold of the files from the author in 2019 and took over maintenance with her permission. I tidied up the package enough to get it to CTAN, but didn't update the functionality. 
I also published it to GitHub to get some feedback on it. It seems to have worked well in 2007 for a few test cases. Unfortunately it now fails every test case, and it looks like needing some serious efforts to fix. Because of this I no longer think that accessibility is fit for purpose. As you've seen I've recently asked CTAN to add a disclaimer to the catalog entry for it. The Github code repository is also tagged as "not ready for production". So, don't try it unless you are curious! If anyone reading has coding skills and would like to contribute to the package, please leave an issue there (or, just fix it...). There's another issue here. Tagging and accessibility is deeply linked to the semantic structure of the document. There's a need to separate the the visible structure in the output PDF (that which a sighted person would recognise from heading numbers and good use of visual clues) from the document content (visible as the tagging tree in adobe, for example) so that machine readers can use it. Ideally this would be done within the core latex code so that a solution is usable by every package, not as a sticking-plaster on top where every change to a package breaks it. This need for a broadly applicable solution is why the approach being pursued by @Ulrike and the core team is much more likely to lead to a sustainable solution. Also there's stuff to be rescued from accessibility, there's some great ideas in tagpdf, and the whole thing has to not break the rest of 'tex. Not much to ask! So.. currently I'm thinking on ways to get some resources for this and will be coordinating with the team. All suggestions are welcome!
2022-10-05 04:56:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44820988178253174, "perplexity": 1134.7986744617244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00664.warc.gz"}
https://www.steakunderwater.com/wesuckless/viewtopic.php?f=19&t=475&p=3963&amp
## InTool Script font so small you can barely read it [FIXED]

SecondMan:

### InTool Script font so small you can barely read it [FIXED]

Fusion version: Fusion 8 beta 1
OS and version: Windows 7 Ultimate
Additional relevant system info:
Description of the bug: The InTool Script font is too small, much smaller than anything else in the interface.
Severity (Trivial, Minor, Major, Critical): Minor
Steps to reproduce: Please, if possible, provide a Fusion setup to help demonstrate the behaviour, either as an attachment or between code tags.
Attachment: InToolScript_TooSmall.png

Fusionator:

### Re: InTool Script font so small you can barely read it

Can you scale it up with ctrl-mousewheel?
2019-06-26 20:45:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.699863851070404, "perplexity": 9944.242507255967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000545.97/warc/CC-MAIN-20190626194744-20190626220744-00047.warc.gz"}
https://transum.org/Maths/Exam/Online_Exercise.asp?NaCu=87
# Exam-Style Questions. ## Problems adapted from questions set for previous Mathematics exams. ### 1. GCSE Higher The Billing triplets are planting seedlings on the first day of the month. The three of them take two hours to plant 300 seedlings. (a) On the second day of the month the triplets are joined by their friend Billy who helps them. Working at the same rate, how many plants should the four of them be able to plant in two hours? (b) Working at the same rate, how much longer would it take four people to plant 1000 seedlings than it would take five people? (c) Billy says that it took two hours for three people to plant 300 seedlings. If I assume they work all day, then in one day three people will plant 3600 seedlings because 300 × 12 = 3600. Why is Billy's assumption not reasonable? What effect has Billy's assumption had on his answer? ### 2. GCSE Higher A Primary school asked 9 children to remove the weeds from the school gardens. The gardens measured 100m2 in total and the job was completed in 10 hours (spread over many weeks). (a) The school was asked to find volunteers to weed another garden measuring 250m2 at a nursing home. This work has to be completed in 15 hours. Calculate the least number of children the school needs to find for this work. (b) State one assumption you have made in your answer to part (a). How would your answer to part (a) change if you did not make this assumption? ### 3. GCSE Higher People are paid to paint polygons. There are eight people in the polygon painting posse. All of the people paint at the same rate. When all of the people are painting, they can paint all of the polygons in the palace in ten days. The table shows the number of people painting each day: Day 1 Day 2 Day 4 All other days Number of people painting 3 6 7 8 Work out the total number of days taken to paint all of the polygons in the palace. ### 4. GCSE Higher Which of the following statements are correct if $$xy = c$$ and $$c$$ is a constant. • (a) $$y$$ is directly proportional to $$x$$ • (b) $$y$$ is directly proportional to $$\frac{1}{x}$$ • (c) $$y$$ is inversely proportional to $$\frac{1}{x}$$ • (d) $$x$$ is directly proportional to $$y$$ ### 5. GCSE Higher If $$a$$ is inversely proportional to $$b$$ and $$b$$ is directly proportional to $$c^2$$ find a formula for $$a$$ in terms of $$c$$ given that $$a=20$$ and $$c = 4$$ when $$b = 8$$. If you would like space on the right of the question to write out the solution try this Thinning Feature. It will collapse the text into the left half of your screen but large diagrams will remain unchanged. The exam-style questions appearing on this site are based on those set in previous examinations (or sample assessment papers for future examinations) by the major examination boards. The wording, diagrams and figures used in these questions have been changed from the originals so that students can have fresh, relevant problem solving practice even if they have previously worked through the related exam paper. The solutions to the questions on this website are only available to those who have a Transum Subscription. Exam-Style Questions Main Page Search for exam-style questions containing a particular word or phrase: To search the entire Transum website use the search box in the grey area below.
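One way to sanity-check question 1 with simple rates (a rough working, not one of the site's solutions): three people plant 300 seedlings in two hours, i.e.

$$\frac{300}{3 \times 2} = 50 \text{ seedlings per person per hour},$$

so four people should manage $4 \times 2 \times 50 = 400$ seedlings in two hours, and 1000 seedlings take $\frac{1000}{4 \times 50} = 5$ hours for four people against $\frac{1000}{5 \times 50} = 4$ hours for five people, a difference of one hour.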
2022-06-29 08:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41186147928237915, "perplexity": 1055.097237171769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00553.warc.gz"}
https://www.biostars.org/p/135663/
how to generate setID file?

la.sy (6.4 years ago): I want to generate a setID file for SKAT (R package), so I am using "annovar", but even after reading the guide I don't know how to use annovar. Could anyone provide some suggestions? Tags: annovar

Comment: What exactly are you not understanding? The annovar documentation is quite straightforward. Alternatively, try wANNOVAR.

marwa (10 weeks ago): I know this comes a long time later, but until now there was no clear answer. To use annovar, first download it from https://annovar.openbioinformatics.org/en/latest/ and then, on your Linux machine, unzip the file with

tar xvfz annovar.latest.tar.gz

then

module load annovar

and then run, as an example,

annotate_variation.pl -out ex1 -build hg19 example/ex1.avinput humandb/

All the example files will be downloaded automatically.
2021-08-02 03:00:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17138947546482086, "perplexity": 8975.81847419084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154302.46/warc/CC-MAIN-20210802012641-20210802042641-00391.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2009-October/041470.html
# [OS X TeX] Per-folder project root Chris Goedde cgg.lists at gmail.com Mon Oct 12 14:16:00 EDT 2009 ```On Oct 12, 2009, at 12:21 PM, Herbert Schulz wrote: > Howdy, > > I guess I don't really understand what you're trying to accomplish. > What do you mean by a ``project root folder''? It almost sound like > you have a particular file that contains definitions that are used > by the other files. If that is the case why not just make the file > containing the definitions generic (no documentclass, etc.) and just > \input it into the other files. A bit more sophisticated is to > create your own package, put it somewhere in ~/Library/texmf/tex/ > latex/ (where ~ is you HOME directory) and then just \usepackage in > each file to include the definitions. That way each document is > completely separate. something different. In regards to Themis' suggestion to use the %!TEX syntax, I'd forgotten about that. I guess I don't like to clutter up my files with front-end specific information. I'd rather do things through the front end itself (which is why I use the "Set Project Root" menu item). I'd just like to be able to set the project root for every file in a given folder with a single setting. Chris ```
2020-09-27 19:26:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8260247707366943, "perplexity": 5845.119403030994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401578485.67/warc/CC-MAIN-20200927183616-20200927213616-00538.warc.gz"}
https://math.stackexchange.com/questions/2560294/orientations-and-the-de-rham-cohomology
# Orientations and the de Rham cohomology Anybody could help me with this exercise, please? If $M$ is a compact, connected, orientable and smooth $n$-manifold: 1) Show that there is a one-to-one correspondence between orientations of $M$ and orientations of the vector space of its de Rham cohomology, under which the cohomology class of a smooth orientation form is an oriented basis for $H_{dR}(M)$. 2) Now suppose $M$ and $N$ are smooth $n$-manifolds with given orientations. Show that a diffeomorphism $F\colon M \rightarrow N$ is orientation preserving if and only if the pullback between their rham cohomologies is orientation preserving. • What do you know / have you tried already? Helping is hard without knowing where to start. – T'x Dec 10 '17 at 18:01 • Actually, I am lost, I don't know how to start. – Irene Gil Dec 10 '17 at 18:04 • Do you know the fact that an orientation on $M$ is equivalent to a nowhere vanishing $n$-form on $M$? – T'x Dec 10 '17 at 18:07 • Yes, I do, but how can I relate this fact with the cohomology? – Irene Gil Dec 10 '17 at 18:10 • $H^n(M)$ consists of equivalence classes of $n$-forms. – T'x Dec 10 '17 at 18:16
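One possible route, assembling the hints from the comments into a sketch rather than a complete proof: for $M$ compact, connected, orientable and $n$-dimensional, $H^n_{dR}(M)$ is one-dimensional, and once an orientation is fixed the integration map $$[\eta] \mapsto \int_M \eta$$ is an isomorphism $H^n_{dR}(M) \to \mathbb{R}$. A smooth orientation form $\omega$ for that orientation has $\int_M \omega > 0$, so $[\omega] \neq 0$ and $[\omega]$ spans $H^n_{dR}(M)$; any two orientation forms inducing the same orientation have classes differing by a positive scalar, hence determine the same ray in $H^n_{dR}(M)$, while the reversed orientation gives the opposite ray. This is the correspondence asked for in 1). For 2), a diffeomorphism $F\colon M \to N$ satisfies $\int_M F^*\omega = \pm \int_N \omega$, with sign $+$ exactly when $F$ preserves the given orientations, so $F^*$ carries the positive ray of $H^n_{dR}(N)$ to the positive ray of $H^n_{dR}(M)$ precisely in that case.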
2019-05-26 09:42:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7793900966644287, "perplexity": 218.91951302178936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259015.92/warc/CC-MAIN-20190526085156-20190526111156-00317.warc.gz"}
https://brilliant.org/problems/poly-trolly/
# Poly Trolly

Geometry Level 2

A polygon of $$n$$ sides has 275 diagonals. The number of sides is:
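For reference, the standard count of diagonals of an $n$-gon settles it in one line:

$$\frac{n(n-3)}{2} = 275 \;\Rightarrow\; n^2 - 3n - 550 = 0 \;\Rightarrow\; (n-25)(n+22) = 0 \;\Rightarrow\; n = 25.$$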
2018-09-22 21:41:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5526173710823059, "perplexity": 5625.932919328072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158691.56/warc/CC-MAIN-20180922201637-20180922222037-00239.warc.gz"}
https://stats.stackexchange.com/questions/321517/from-discrete-choice-to-conjoint-utilities
# From Discrete choice to Conjoint utilities

I'm having trouble figuring out how to correctly calculate conjoint part-worth utilities from a discrete choice experiment. I have recently run a pilot study to analyse in R where I used the packages "support.CEs" and "survival" and I'm trying to figure out how to calculate conjoint part-worth utilities from the coefficients, but I am not sure I'm doing it correctly. For simplicity's sake I will use a simplified example about rice to explain my problem. The experiment contains the following attributes and levels:

Region = c("RegA", "RegB", "RegC"),
Cultivation = c("Conv", "NoChem", "Organic"),
Price = c("1700", "2000", "2300"))

I then used the clogit() function to analyse the results of the experiment using the following model:

RES ~ ASC + RegB + RegC + NoChem + Organic + Price + strata(STR)

This gave the following result:

         coef    exp(coef)  se(coef)  z       p
ASC      4,443   85,035     0,483     9,199   0,00E+00
RegB     0,469   1,599      0,137     3,417   6,30E-04
RegC     0,968   2,632      0,108     8,996   0,00E+00
NoChem   0,752   2,120      0,177     4,257   2,10E-05
Organic  1,165   3,205      0,141     8,252   1,10E-16
Price    -0,002  0,998      0,000     -9,732  0,00E+00

Now my assumption is that the part-worth utility for each attribute level is simply the corresponding coefficient, with the remaining level not part of the model (RegA for the Region attribute) being 0? However, while looking at another R package called conjoint, devised to analyse rating-based conjoint, I noticed that they determined the last level's value in a different way:

# Example 1
library(conjoint)
data(herbata)
ul<-caUtilities(hpref,hprof,hlevn)
print(ul)

To find the remaining level's utility they take the sum of all other levels in the attribute and subtract it from 0, meaning RegA would have a utility of -1,437 instead of 0.

0 - (0,469 + 0,968) = -1,437

I can't quite figure out the logic behind this, as it seems to considerably widen the distance in utility between RegA and RegB/RegC while keeping the distance between RegB and RegC fixed. At first I dismissed this as being a mistake, however I then saw that the example for the ChoiceModelR package also calculated the remaining attribute level as the inverse sum of the rest. Is this truly the correct way to determine the utility of the remaining attribute level?

Secondary question: If so, how is the result of, say, the "marginal willingness to pay" function mwtp() reliable, when it would set RegA as 0?
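For what it's worth, a minimal R sketch of the sum-to-zero ("effects coding") recentring that conjoint-style packages report, applied to the Region coefficients above (the object names are made up for illustration, and whether this recentring is appropriate depends on which coding your model actually used):

# clogit() dummy coding: RegA is the reference level, so its raw utility is 0
region_dummy <- c(RegA = 0, RegB = 0.469, RegC = 0.968)

# Sum-to-zero (effects-coded) part-worths: subtract the attribute mean, so the
# three levels sum to 0 while the gaps between levels are unchanged
region_effects <- region_dummy - mean(region_dummy)
round(region_effects, 3)
#   RegA   RegB   RegC
# -0.479 -0.010  0.489

On this reading, taking the omitted level as 0 - (0,469 + 0,968) = -1,437 only makes sense if the reported coefficients are already effects-coded (i.e. the included levels themselves sum to zero with the omitted one), which is not what clogit() with dummy coding returns. Recentring as above changes only the zero point, not the differences between levels, which is why willingness-to-pay comparisons between levels are unaffected even though the value quoted for any single level depends on the coding.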
2021-03-06 02:09:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7637666463851929, "perplexity": 2322.2508677193937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374217.78/warc/CC-MAIN-20210306004859-20210306034859-00336.warc.gz"}
https://rdrr.io/cran/QBAsyDist/man/SemiQRegAND.html
# SemiQRegAND: Semiparametric quantile regression in quantile-based... In QBAsyDist: Asymmetric Distributions and Quantile Estimation ## Description The local polynomial technique is used to estimate location and scale function of the quantile-based asymmetric normal distribution discussed in Gijbels et al. (2019c). The semiparametric quantile estimation technique is used to estimate βth conditional quantile function in quantile-based asymmetric normal distributional setting discussed in Gijbels et al. (2019b) and Gijbels et al. (2019c). ## Usage 1 2 3 4 5 locpolAND_x0(x, y, p1 = 1, p2 = 1, h, alpha = 0.5, x0, tol = 1e-08) locpolAND(x, y, p1, p2, h, alpha, m = 101) SemiQRegAND(beta, x, y, p1 = 1, p2 = 1, h, alpha = NULL, m = 101) ## Arguments x This a conditioning covariate. y The is a response variable. p1 This is the order of the Taylor expansion for the location function (i.e.,μ(X)) in local polynomial fitting technique. The default value is 1. p2 This is the order of the Taylor expansion for the log of scale function (i.e., \ln[φ(X)]) in local polynomial fitting technique. The default value is 1. h This is the bandwidth parameter h. alpha This is the index parameter α of the quantile-based asymmetric normal density. The default value is 0.5 in the codes code locpolAND_x0 and code locpolAND. The default value of α is NULL in the code SemiQRegAND. In this case, α will be estimated based on the residuals from local linear mean regression. x0 This is a grid-point x_0 at which the function is to be estimated. tol the desired accuracy. See details in optimize. m This is the number of grid points at which the functions are to be evaluated. The default value is 101. beta This is a specific probability for estimating βth quantile function. ## Value The code locpolAND_x0 provides the realized value of the local maximum likelihood estimator of \widehat{θ}_{rj}(x_0) for (r\in \{1,2\}; j=1,2,...,p_r) with the estimated approximate asymptotic bias and variance at the grind point x_0 discussed in Gijbels et al. (2019c). The code locpolAND provides the realized value of the local maximum likelihood estimator of \widehat{θ}_{r0}(x_0) for (r\in \{1,2\}) with the estimated approximate asymptotic bias and variance at all m grind points x_0 discussed in Gijbels et al. (2019c). The code SemiQRegAND provides the realized value of the βth conditional quantile estimator by using semiparametric quantile regression technique discussed in Gijbels et al. (2019b) and Gijbels et al. (2019c). ## References Gijbels, I., Karim, R. and Verhasselt, A. (2019b). Quantile estimation in a generalized asymmetric distributional setting. To appear in Springer Proceedings in Mathematics & Statistics, Proceedings of ‘SMSA 2019’, the 14th Workshop on Stochastic Models, Statistics and their Application, Dresden, Germany, in March 6–8, 2019. Editors: Ansgar Steland, Ewaryst Rafajlowicz, Ostap Okhrin. Gijbels, I., Karim, R. and Verhasselt, A. (2019c). Semiparametric quantile regression using quantile-based asymmetric family of densities. Manuscript. 
## Examples 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 data(LocomotorPerfor) x=log(LocomotorPerfor$Body_Mass) y=log(LocomotorPerfor$MRRS) h_ROT = 0.9030372 locpolAND_x0(x, y, p1=1,p2=1,h=h_ROT,alpha=0.50,x0=median(x)) data(LocomotorPerfor) x=log(LocomotorPerfor$Body_Mass) y=log(LocomotorPerfor$MRRS) h_ROT = 0.9030372 locpolAND(x, y, p1=1,p2=1,h=h_ROT, alpha=0.50) # Data data(LocomotorPerfor) x=log(LocomotorPerfor$Body_Mass) y=log(LocomotorPerfor$MRRS) h_ROT = 0.9030372 gridPoints=101 alpha= 0.5937 plot(x,y) # location and scale functions estimation at the grid point x0 gridPoints=101 fit_AND <-locpolAND(x, y, p1=1,p2=1,h=h_ROT, alpha=alpha, m = gridPoints) par(mgp=c(2,.4,0),mar=c(5,4,4,1)+0.01) # For phi plot plot(fit_AND$x0,exp(fit_AND$theta_20),ylab=expression(widehat(phi)(x[0])), xlab="log(Body mass)",type="l",font.lab=2,cex.lab=1.5, bty="l",cex.axis=1.5,lwd =3) ## For theta2 plot plot(fit_AND$x0,fit_AND$theta_20,ylab=expression(bold(widehat(theta[2]))(x[0])), xlab="log(Body mass)",type="l",col=c(1), lty=1, font.lab=1,cex.lab=1.5, bty="l",cex.axis=1.3,lwd =3) par(mgp=c(2.5, 1, 0),mar=c(5,4,4,1)+0.01) # X11(width=7, height=7) plot(x,y, ylim=c(0,4.5),xlab = "log(Body mass (kg))", ylab = "log(Maximum relative running speed)",font.lab=1.5, cex.lab=1.5,bty="l",pch=20,cex.axis=1.5) lines(fit_AND$x0,fit_AND$theta_10, type='l',col=c(4),lty=6,lwd =3) lines(fit_AND$x0,SemiQRegAND(beta=0.50,x, y, p1=1,p2=1, h=h_ROT,alpha=alpha,m=gridPoints)$fit_beta_AND, type='l',col=c(1),lty=5,lwd =3) lines(fit_AND$x0,SemiQRegAND(beta=0.90,x, y, p1=1,p2=1, h=h_ROT,alpha=alpha,m=gridPoints)$fit_beta_AND,type='l',col=c(14),lty=4,lwd =3) lines(fit_AND$x0,SemiQRegAND(beta=0.10,x, y, p1=1,p2=1, h=h_ROT,alpha=alpha,m=gridPoints)$fit_beta_AND,type='l', col=c(19),lty=2,lwd =3) legend("topright", legend = c(expression(beta==0.10), expression(beta==0.50), expression(beta==0.5937), expression(beta==0.90)), col = c(19,1,4,14), lty=c(2,5,6,4), adj = c(.07, 0.5),, inset = c(0.05, +0.01), lwd = 3,cex=1.2) QBAsyDist documentation built on Sept. 4, 2019, 1:05 a.m.
2022-05-27 15:56:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6011162996292114, "perplexity": 4143.115705607316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00081.warc.gz"}
http://math.stackexchange.com/questions/284166/given-the-sde-dx-t-db-tbx-t-dt-with-x-bx-leq-0-forall-x-in-mathb
# Given the SDE: $dX_t=dB_t+b(X_t) dt$ with $(x,b(x)) \leq 0, \forall x \in \mathbb{R}^n$, prove that $E[|X_t|^2] \leq nt+E[|X_0|^2]$ I'm working on this problem: Given a solution $X_t$ to the SDE $$dX_t=dB_t+b(X_t) dt$$ where $B_t$ is an $n$-dimensional Brownian motion, and $b:\mathbb{R}^n \to \mathbb{R}^n$ a Lipschitz continuous function satisfying $$(x,b(x)) \leq 0, \forall x \in \mathbb{R}^n$$ prove that $E[|X_t|^2] \leq nt+E[|X_0|^2]$. ($E[ \cdot ]$ is the expected value over the probability space, $| \cdot |$ is the Euclidean norm in $\mathbb{R^n}$) This is what I got to this point: first writing the SDE by components, $$X_t^i=X_0^i+\int_0^t {dB}_t+\int_0^t b^i(X_s^i) ds$$ calculating, using $B_0=0$, $$X_t^i-\int_0^t b^i(X_s^i) ds = X_0^i+B_t^i$$ squaring both sides and taking the expected value, using $E[B_t^i]=0, E[(B_t^i)^2]=t$ $$E[(X_t^i)^2]-2E[X_t^i \int_0^t b^i(X_s^i) ds]+E[(\int_0^t b^i(X_s^i) ds)^2]=E[(X_0^i)^2]+t$$ summing over all component $1 \leq i \leq n$, $$E[|X_t|^2]=E[|X_0|^2]+nt+2E[\sum_{i=1}^n X_t^i \int_0^t b^i(X_s^i) ds]-\sum_{i=1}^n E[(\int_0^t b^i(X_s^i) ds)^2]$$ the last term is clearly $\leq 0$ and as is, poses no problem. So I'm left with proving: $$E[\sum_{i=1}^n X_t^i \int_0^t b^i(X_s^i) ds] \leq 0$$ Does this really hold? Any help with this, or another proof of the problem altogether would be highly appreciated. First, notice that $b(X_t^{(i)})$ makes no sense. It should be $b(X_t)$. We apply Ito's lemma with $f(x) = \|x\|^2$: $$f(X_t) - f(X_0) = M_t + 2\sum_{i=1}^n\int_0^tX_s^{(i)}b^{(i)}(X_s)ds + \sum_{i=1}^n\int_0^tds$$ where $$M_t = 2\sum_{i=1}^n \int_0^tX_s^{(i)}dB^{(i)}_s$$ is a continuous local martingale. Let $\tau_k \uparrow +\infty$ be a sequence of stopping times such that $M_{t\wedge \tau_k}$ is a martingale. Using the hypothesis, we have $$E(f(X_{t \wedge \tau_k})) = E(f(X_0)) + 2E\left(\int_0^{t\wedge \tau_k} (X_s,b(X_s))ds\right) + n E(t\wedge \tau_k) \leq E(f(X_0)) + nt.$$ Finally, Fatou's lemma gives, as $k \to \infty$ $$E(\|X_t\|^2) \leq \liminf_{k\to\infty}E(f(X_{t\wedge \tau_k})) \leq E(\|X_0\|^2) + nt$$ My apologies. I was copying from my notes and there I'm just writing $i$'s instead of $(i)$'s (and also the "$b(X_t^{(i)})$" thing too, typing too fast and carelessly.) Thank you very much for the solution. –  DancefloorTsunderella Jan 22 '13 at 14:59
2014-08-23 20:10:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857127070426941, "perplexity": 217.03757230302787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826679.55/warc/CC-MAIN-20140820021346-00448-ip-10-180-136-8.ec2.internal.warc.gz"}
https://greprepclub.com/forum/what-is-the-area-of-the-quadrilateral-shown-above-5139.html
What is the area of the quadrilateral shown above?

Moderator (07 Jun 2017):

Question Stats: 86% (01:03) correct, 13% (01:34) wrong, based on 36 sessions

Attachment: #GREpracticequestion What is the area of the quadrilateral shown above.jpg

What is the area of the quadrilateral shown above?

A) $$2 \sqrt{3}$$
B) $$3 \sqrt{3}$$
C) $$6$$
D) $$6 \sqrt{3}$$
E) $$8$$

Intern, mayurwaghela (04 Jan 2018): Draw a perpendicular forming two right-angled triangles; the figure then has two right-angled triangles and one rectangle. Both triangles have the same angles, so their areas are the same. Using the Pythagorean theorem, find the height of the triangle, i.e. sqrt(3), then find the area of both triangles and the rectangle: sqrt(3)/2 + sqrt(3)/2 + 2*sqrt(3) = 3sqrt(3)

Intern (11 Jan 2018): Consider three equilateral triangles within the quadrilateral, each with side measuring 2. The area of an equilateral triangle is (sqrt(3)/4)*(side)^2; multiply it by 3 to get the area of three equilateral triangles, i.e. the whole quadrilateral.

Moderator (23 Jan 2018): Added the OA. It is B. Regards

Director (20 Jan 2019), quoting mayurwaghela ("By using pythagoras theorem find the height of the triangle i.e sqrt(3)"): How?

Director (20 Jan 2019): Can we think of it as a trapezium?

Intern (23 Jan 2019): Yes, it can be considered as a trapezoid, since the two angles alpha are the same and are connected to the longer of the two bases. Area: 0.5(Base 1 + Base 2) * height. How do we get the height? The shorter base (length 2) must have its center where the longer base has its center, due to the fact that both angles are equal. Thus, we derive that the longer base just extends the shorter base by (4 - 2 = 2).
Split equally on each side, we can see the longer base is composed of lengths 1 + 2 + 1. If we look at the left part of the figure we have the upward sloping line with length 2. If we drop a perpendicular from the vertex where the upward sloping line meets the shorter base, we arrive exactly at the first part of the 1 + 2 + 1 composition of the longer base. Thus we have created a triangle with base 1 and hypotenuse 2. Since we also know that it has a 90-degree angle, we can deduce it must be a 30-60-90 triangle. (Reminder: the side lengths of a 30-60-90 triangle are in the ratio 1 : sqrt(3) : 2.) Thus the height of the triangle is sqrt(3), which equals the height of the trapezoid. Plug this into the formula.
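Putting the numbers from that argument together as a quick check of the official answer:

$$h = \sqrt{2^2 - 1^2} = \sqrt{3}, \qquad \text{Area} = \frac{1}{2}(2 + 4)\sqrt{3} = 3\sqrt{3},$$

which is answer choice B, agreeing with the earlier right-triangles-plus-rectangle and three-equilateral-triangles computations.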
2019-03-18 23:26:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4506186246871948, "perplexity": 2174.724092004832}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201812.2/warc/CC-MAIN-20190318232014-20190319014014-00306.warc.gz"}
https://www.physicsforums.com/threads/find-the-equation-of-an-ellipse.870790/
# Find the equation of an ellipse ## Homework Statement Hello, my friend asked my If I could help him with this problem. However I just can't seem to find a way to solve this. Ellipse Focus(2,2) vertex(2,-6) Point(26/5,2) a+e=8 find the equation of the ellipse ## Homework Equations (x-m)^2/a^2+(y-n)^2/b^2=1 Center(m,n) a=moyor axis b=minor axis ## The Attempt at a Solution I will post pictures of my work as I don't yet know how to use math syntax on the internet. I would really appreciate if you could find my mistake. I tried duing everything, however I can't seem to get the point(26/5,2) to be on my ellipse. Everything else looks fine The order of which I did the problem is represented by roman numbers So what I tried doing was: From the definition I know that the sum of the distance from each of the two fucoses to the outer line is constant and it equals 2a. We are also told that a+e=8. From the picture we can clearly see that the distance from the upper tocus to the vertex equals a+e( or 8). And we also know that 2a=8+e solving both equation we get that a=16/3 and e=8/3. Then I used the pitagoras therom to get b( the minor axis) and from then on I used the point (2,-6) in the ellipse equasion to get the center( later I realized that I could just substract e from the first focus). I got the final equation, however when I go to check it with the point (26/5,2) I don't get the right answer. Below I will include the picture of the ellipse I made using the desmos graphing calculator . as you can see the point does not lay on the line. I would really appreciate it if you could check my work and find my mistake. Homework Helper Gold Member I didn't check your calculations, but do you know for certain that the ellipse is in a horizontal or vertical orientation, i.e. that there is no xy term in your general form that first needs to be removed with a rotation of axes? Homework Helper Gold Member I didn't check your calculations, but do you know for certain that the ellipse is in a horizontal or vertical orientation, i.e. that there is no xy term in your general form that first needs to be removed with a rotation of axes? In fact, I think the answer to my question is that it is not rotated, because the focus is at (2,2) and a vertex at (2,-6). I'll need to check the algebra further... It appears it is in a vertical orientation. ...editing... some calculations I did give a=b=8 as one solution, making it a circle, with c=0. I will need to check my algebra. I would be a little surprised if this is the case. In any case, this problem appears to be somewhat challenging...editing some more... the info given of the focus position, vertex position, along with a+e=8 results in a cubic equation for "b" and one solution was b=8, for focus (2,2) being the top or bottom focus, and in both cases, the other solutions were complex/imaginary. The resulting circle does not pass through the point (26/5,2). Thereby the result I obtained is that the info given in the original problem is inconsistent. (I'm using eccentricity e=c/b and c^2=b^2-a^2. Even though it's been a while since I studied the ellipse in detail, I think I got this part correct.) Last edited: Matejxx1 haruspex Homework Helper Gold Member I haven't managed to figure out how you obtain 2a=8+e. Matejxx1 Homework Helper Gold Member Matejxx1 I would also like to add, that I noticed you guys use c for the distance between the focus and the center of the ellipse. I was unclear my mistake. We denote e as the distance between the focus and the center. 
So basically our e is your c.

I didn't check your calculations, but do you know for certain that the ellipse is in a horizontal or vertical orientation, i.e. that there is no xy term in your general form that first needs to be removed with a rotation of axes?
This is a high school problem and they have not done any kind of problem where the ellipse's axes are not parallel to x and y, and as you said the focus and the vertex are both on x=2.

The resulting circle does not pass through the point (26/5,2). Thereby the result I obtained is that the info given in the original problem is inconsistent. (I'm using eccentricity e=c/b and c^2=b^2-a^2. Even though it's been a while since I studied the ellipse in detail, I think I got this part correct.)
I got many different answers as well. Once I got that it's just a line with a=8 and b=0.

I haven't managed to figure out how you obtain 2a=8+e.
Like I mentioned at the top, I was a little unclear. What I mean by e is the distance between the focus and the center (basically your c), and the distance |G1T1|=8 and |G2T1|=a-e, so the distance from both foci to the point T1 is 8+a-e and this equals 2a.

Aha . . . I see it now, will try to do the problem again.

OK, so now I spent quite a bit of time on this problem. I did it from the start again and this is what I got: then I used Wolfram Alpha to solve this equation and got this. Only 1 solution was real, so I used that one to calculate both of the axes. However, once again I got an ellipse but the point (26/5,2) was not on it. Do you guys think it's possible that the instructions are wrong, because I'm completely lost at this point.
Last edited:

haruspex
Homework Helper Gold Member
OK, so now I spent quite a bit of time on this problem. I did it from the start again and this is what I got: View attachment 100447 then I used Wolfram Alpha to solve this equation and got this View attachment 100448. Only 1 solution was real, so I used that one to calculate both of the axes. However, once again I got an ellipse but the point (26/5,2) was on it. Do you guys think it's possible that the instructions are wrong, because I'm completely lost at this point.
I assume you meant to write that the point was not on it. Without using the location of the point on it, you do not have enough information to find the equation, so I do not understand what you did.
Matejxx1 haruspex
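Not part of the thread, but for readers who want to check the given data themselves, here is a small symbolic sketch (Python with SymPy, which is an assumption, not something used in the thread). It encodes the conditions in the thread's conventions: a vertical major axis, centre (2, m), semi-major axis a, centre-to-focus distance e (the thread's e, the usual c), upper focus at (2, 2), lower vertex at (2, −6), a + e = 8, and the point (26/5, 2) on the curve.

```python
import sympy as sp

a, e = sp.symbols('a e', positive=True)   # semi-major axis and centre-to-focus distance
m = sp.symbols('m', real=True)            # y-coordinate of the centre (2, m)

conditions = [
    sp.Eq(m + e, 2),          # upper focus at (2, 2)
    sp.Eq(m - a, -6),         # lower vertex at (2, -6)
    sp.Eq(a + e, 8),          # the given condition a + e = 8
]
b2 = a**2 - e**2              # b^2 for a vertical-major-axis ellipse
point_on_curve = sp.Eq((sp.Rational(26, 5) - 2)**2 / b2 + (2 - m)**2 / a**2, 1)

solutions = sp.solve(conditions + [point_on_curve], [a, e, m], dict=True)
# keep only solutions that describe a genuine ellipse (need e < a, i.e. b^2 > 0)
print([s for s in solutions if s[e] < s[a]])
```

Filtering for e < a discards any algebraic branch with an imaginary minor axis; whichever solutions remain can then be substituted back into (x−2)²/b² + (y−m)²/a² = 1 and checked against the point (26/5, 2).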
2022-07-02 11:13:13
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8185377717018127, "perplexity": 331.37164169860705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00247.warc.gz"}
https://math.stackexchange.com/questions/1677464/solving-differentialequation-y-ytanx4sinx
# Solving differential equation $y'=y\tan(x)+4\sin(x)$

So I'm pretty new to differential equations and I currently struggle with this one. First things first. I know it's a first-order linear differential equation with an undetermined coefficient. To solve it I have to find the particular and the uniform (is this the right term?) solutions. The general solution is then $y = y_h + y_p$

$y'=y\cdot \tan(x)+4\sin(x)$

1. uniform solution

$y_h'=y_h\cdot \tan(x)$ $\int\frac{y_h'}{y_h} dx = \int \tan(x) dx$ $\int\frac{1}{y_h} dy = \int \tan(x) dx$ $\ln|y_h| = -\ln(\cos(x))+c$ $\ln|y_h| = \ln(\cos^{-1}(x))+c$ $|y_h| = e^c\cdot \frac{1}{\cos(x)}$ $y_h = C\cdot \frac{1}{\cos(x)}$

2. particular solution

I use the uniform solution. $y_p = C(x)\cdot \frac{1}{\cos(x)}$ $y_p' = C'(x)\cdot \frac{1}{\cos(x)}+C(x)\cdot \tan(x)\cdot \frac{1}{\cos(x)}$ $y_p' = C'(x)\cdot \frac{1}{\cos(x)}+y_h\cdot \tan(x)$ Back into y': $C'(x)\cdot \frac{1}{\cos(x)}+ y\cdot \tan(x) = y\cdot \tan(x)+4\sin(x)$ $C'(x) = 4\sin(x)\cos(x)$ Integrating gives: $C(x) = -2\cos^2(x)$ Back into $y_p$: $y_p = -2\cos^2(x) \cdot \frac{1}{\cos(x)}$ $y_p = -2\cos(x)$

Now everything back together. $y = y_h + y_p$ $y = C\cdot \frac{1}{\cos(x)} - 2\cos(x)$

I definitely got something wrong… I'm pretty sure that we learned it this way, or I might have messed something up. Thanks for the help, I have been trying to solve it for quite some time.

• "Homogeneous" is the word, not uniform. – Bobson Dugnutt Feb 29 '16 at 19:50
• The step where you go from $y_p' = C'(x) / \cos(x) + C(x) \tan(x) / \cos(x)$ to $y_p' = C'(x) / \cos(x) + y_h \tan(x)$ is wrong. The $C$ in the expressions for $y_p$ and $y_h$ have nothing to do with each other. For $y_h$, $C$ is a constant that will be determined by boundary/initial conditions. For $y_p$, $C(x)$ is something altogether different and also a function of $x$ (i.e., not constant). It is independent of the boundary/initial conditions and has nothing to do with $C$. It's a property of the particular solution you seek. Maybe you should call it something else to avoid confusion. – nukeguy Feb 29 '16 at 19:54
• I don't understand what your final problem is: $y = C\cdot \frac{1}{\cos(x)} - 2\cos(x)$ is a solution of $y'=y\cdot \tan(x)+4\sin(x)$ and indeed its general solution. – Jean Marie Feb 29 '16 at 20:02
• @Seen The statement in your calculus worksheet is poorly worded in my opinion. In any case, $c(x) f_h(x)$ is not a homogeneous solution unless $c(x)$ is constant. What you're trying to do here is not to find another homogeneous solution, but rather try to find a particular solution by guessing that it looks something like $c(x) f_h(x)$. You start off with $f_p(x) = c(x) f_h(x) = c(x)/\cos(x)$, but then later on in your derivation you claim $f_h(x) = c(x)/\cos(x)$. These two statements do not agree with each other. – nukeguy Mar 1 '16 at 15:51
• Actually, ignore the comment I posted earlier about your solution being wrong, I forgot a minus sign when I plugged it back in (I just deleted it so nobody else will get confused). The solution you have is actually correct -- as @JeanMarie pointed out, if you plug in $y= C / \cos(x) - 2 \cos(x)$ back into the differential equation, it is satisfied. Your definition of $C$ is just different from the $C$ in Jan Eerland's solution. As you noted, $-2\cos^2(x) = -\cos(2x)-1$. Try replacing $C$ with something like $C+1$ or $C-1$ and you will see that your solutions are the same.
– nukeguy Mar 1 '16 at 16:09

$$y'(x)=y(x)\tan(x)+4\sin(x)\Longleftrightarrow$$ $$y'(x)-y(x)\tan(x)=4\sin(x)\Longleftrightarrow$$ Let $\mu(x)=e^{\int-\tan(x)\space\text{d}x}=\cos(x)$; Multiply both sides by $\mu(x)$: $$y'(x)\cos(x)-y(x)\sin(x)=4\cos(x)\sin(x)\Longleftrightarrow$$ Substitute $-\sin(x)=\frac{\text{d}}{\text{d}x}\left(\cos(x)\right)$: $$y'(x)\cos(x)-y(x)\cdot\frac{\text{d}}{\text{d}x}\left(\cos(x)\right)=4\cos(x)\sin(x)\Longleftrightarrow$$ Apply the reverse product rule to the left-hand side: $$\frac{\text{d}}{\text{d}x}\left(y(x)\cos(x)\right)=4\cos(x)\sin(x)\Longleftrightarrow$$ $$\int\frac{\text{d}}{\text{d}x}\left(y(x)\cos(x)\right)\space\text{d}x=\int4\cos(x)\sin(x)\space\text{d}x\Longleftrightarrow$$ $$y(x)\cos(x)=-\cos(2x)+\text{C}\Longleftrightarrow$$ $$y(x)=\frac{-\cos(2x)+\text{C}}{\cos(x)}\Longleftrightarrow$$ $$y(x)=\text{C}\sec(x)-\cos(2x)\sec(x)$$

• Ok, so my answer is different than yours but could you elaborate on what I did wrong? – Seen Feb 29 '16 at 21:15
• Yes I can, but if you want to learn from your mistake find it yourself! – Jan Feb 29 '16 at 21:17
• So I used a different integral but $-2\cos^2(x)$ is equivalent to $-\cos(2x)-1$, so it actually should end up being the same. Thanks for the effort, but could you now help me a little bit by elaborating? I'm still new to differential equations and I wasn't even sure if I did the scheme right or if I made a calculation error. – Seen Feb 29 '16 at 21:31
• @JanEerland's technique is also different from yours. He does not try to solve for the homogeneous and particular parts of the solution separately. Instead, he uses an "integrating factor" technique, which works very well for 1st-order linear ordinary differential equations such as this one. (See: mathworld.wolfram.com/IntegratingFactor.html) – nukeguy Mar 1 '16 at 15:58
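A quick way to see that the forms above really do solve the equation is to substitute back into the ODE. The snippet below (SymPy; not part of the original question or answers) checks the asker's final expression.

```python
import sympy as sp

x, C = sp.symbols('x C')
y = C / sp.cos(x) - 2 * sp.cos(x)          # the asker's general solution
residual = sp.diff(y, x) - (y * sp.tan(x) + 4 * sp.sin(x))
print(sp.simplify(residual))               # prints 0, so the ODE is satisfied

# The answer's form C*sec(x) - cos(2x)*sec(x) differs only in the constant,
# since -cos(2x) = -2*cos(x)**2 + 1, exactly as discussed in the comments.
```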
2019-12-14 02:43:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7113978266716003, "perplexity": 249.24235964626715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540579703.26/warc/CC-MAIN-20191214014220-20191214042220-00018.warc.gz"}
https://mathematica.stackexchange.com/questions/235767/what-is-keyboard-shortcut-combination-for-resourcefunction
# What is keyboard shortcut combination for ResourceFunction? I have seen in some notebooks that functions called with ResourceFunction["PairwiseScatterPlot"] appear with a red set of square brackets enclosing a red filled square. It would be great to know the shortcut instead of having to type out ResourceFunction every time. Any help is appreciated. • I don't think there is one. This is a display form, not a form one types in manually. Dec 2 '20 at 13:14 • Perhaps ResourceFunctionInput over at the function repository will do what you are asking: resources.wolframcloud.com/FunctionRepository/resources/… Dec 2 '20 at 14:27 • (+1) Mostly for the pointing out PairwiseScatterPlot. Dec 2 '20 at 14:28 • @JoshuaSchrier that’s it for sure, you should post it as an answer so that OP can accept it! Dec 9 '20 at 11:52 ResourceFunction["ResourceFunctionInput"]["InstallAlias"] permanently adds an interactive input for ResourceFunction symbols as input alias esc-rfi-esc that allows for inline creation of ResourceFunction symbols. There are also other options for adding menu bar items and other ways to use this functionality. • Not sure why but on my machine (Ubuntu 18.08 with MMA 12.0) this fails with the message SystemPacletUninstall::shdw: Symbol PacletUninstall appears in multiple contexts {System, PacletManager}; definitions in context System may shadow or be shadowed by other definitions. Anyone having the same experience? Jan 18 '21 at 17:37
2022-01-26 07:40:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.357421338558197, "perplexity": 2176.357102045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00577.warc.gz"}
https://byjus.com/question-answer/a-al3-cu2-na-zn2-ions-are-present-in-an-aqueous-solution-such-that-the/
Question

# (a) Al3+, Cu2+, Na+, Zn2+ ions are present in an aqueous solution, such that the concentration of the ions is the same. Write the order of discharge of the ions. (b) Amongst the OH− ions and Br− ions, which are likely to discharge first?

## Solution

(a) The order of discharge of metallic ions (cations) depends on their position in the electrochemical series. If the concentration of the ions is equal, the cation located lower in the electrochemical series gets discharged first (as it easily gains electrons) at the respective electrodes, followed by the other cations in the series, in decreasing order. Since the order of reactivity of cations is Na+ > Al3+ > Zn2+ > Cu2+, the Cu2+ ion, being at the lowest position of the electrochemical series, would get discharged first and the Na+ ion, being at the top, gets discharged last.

(b) The order of discharge of anions depends on the position of the anions in the electrochemical series. The anion located lower in the electrochemical series gets discharged first (as these easily lose electrons) at the respective electrodes, followed by the other anions in the series, in decreasing order. The ${\mathrm{OH}}^{-}$ ion is placed below the bromide ion in the series, hence it easily loses electrons and is likely to discharge earlier than the bromide ion.
2023-01-29 23:17:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164403438568115, "perplexity": 2306.4293397253427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00787.warc.gz"}
http://math.stackexchange.com/questions/281267/if-a-in-mathrmclos-does-it-follow-that-there-exists-a-sequence-of-points
# If $a\in \mathrm{clo}(S)$, does it follow that there exists a sequence of points in $S$ that converges to $a$? Let $X$ be a topological space and $S=\{x_n\}$ be a sequence of points in $X$. Suppose $a$ is a point in $X$ such that $a$ is adherent to $S$(that is $a$ is in the closure of $S$),I want to ask if there must exist a sequence $\{y_n\}$ in $S$ such that the limit of $\{y_n\}$ is $a$. If not,please give an example,Thanks! - What do you think? –  Did Jan 18 '13 at 8:14 You may want to check out the properties of the Arens-Fort space, which is not a first-countable space. –  Haskell Curry Jan 18 '13 at 8:23 There is a class of spaces, called Fréchet–Urysohn spaces, which are defined by having the property that $x \in \overline{A}$ iff there is a sequence in $A$ converging to $x$. Agustí Roig's answer indicates that all first-countable spaces are Fréchet–Urysohn. The following example shows that this is a strictly larger class of spaces. Example 1: Consider the quotient space $X = \mathbb{R} / \mathbb{N}$. (If you are unfamiliar with quotient spaces, let $X = ( \mathbb{R} \setminus \mathbb{N} ) \cup \{ * \}$, and topologise $X$ by declaring $U \subseteq X$ to be open iff $U \setminus \{ * \}$ is open in $\mathbb{R}$, and if $* \in U$, then for each $n \in \mathbb{N}$ there is a $\epsilon > 0$ such that $( n - \epsilon , n ) \cup ( n , n + \epsilon ) \subseteq U$.) Suppose that $\{ U_n : n \in \mathbb{N} \}$ is any countable family of open neighbourhoods of $*$, for each $n \in \mathbb{N}$ find $\epsilon_n > 0$ such that $( n - \epsilon_n , n ) \cup ( n , n + \epsilon_n ) \subseteq U_n$. Define $$V = \{ * \} \cup \bigcup_{n \in \mathbb{N}} \left( ( n - \frac{\epsilon_n}2 , n ) \cup ( n , n - \frac{\epsilon_n}2 ) \right).$$ Then $V$ is an open neighbourhood of $*$, but $U_n \not\subseteq V$ for all $n$. Therefore $X$ is not first-countable. To show that $X$ is Fréchet–Urysohn, let $A \subseteq X$, and $x \in \overline{A}$. • If $x \neq *$, then letting $A_0 = A \setminus \{ * \}$ it follows that $x \in \mathrm{cl}_\mathbb{R} ( A_0 )$ (the closure of $A_0$ in $\mathbb{R}$ with the usual topology). As $\mathbb{R}$ is Fréchet–Urysohn there is a sequence in $A_0$ converging to $x$ in the topology of $\mathbb{R}$, and this same sequence can be shown to converge to $x$ in the topology of $X$. • If $x = *$, the without loss of generality assume that $* \notin A$. I claim that there is a $k \in \mathbb{N}$ such that $k \in \mathrm{cl}_\mathbb{R} ( A )$. If not, then for each $k \in \mathbb{N}$ there is an $\epsilon_k > 0$ such that $( k - \epsilon_k , k + \epsilon_k ) \cap A = \emptyset$ (note that we can take $\epsilon_k < 1$). Then $$V = \{ * \} \cup \bigcup_{k \in \mathbb{N}} \left( ( k - \epsilon_k , k ) \cup ( k , k + \epsilon_k ) \right)$$ is an open neighbourhood of $*$ disjoint from $A$, contradicting that $* \in \overline{A}$! Thus there is a $k \in \mathbb{N}$ such that $k \in \mathrm{cl}_\mathbb{R} ( A )$. As $\mathbb{R}$ is Fréchet–Urysohn there is a sequence in $A$ converging to $k$. This same sequence can be shown to converge to $*$ in the space $X$. An example of a non-Fréchet–Urysohn space is as follows: Example 2: Consider $\mathbb{R}$ (or any uncountable set) with the co-countable topology: $U \subseteq \mathbb{R}$ is open iff either $U = \emptyset$ or $\mathbb{R} \setminus U$ is countable. 
Setting $A = \mathbb{R} \setminus \{ 0 \}$, note that $0 \in \overline{ A }$, however if $( x_i )_{i=1}^\infty$ is any sequence in $A$, then $\mathbb{R} \setminus \{ x_i : i \in \mathbb{N} \}$ is an open neighbourhood of $0$ containing no members of the sequence, and so the sequence cannot converge to $0$. The above example is somewhat lacking as $X$ fails to be Hausdorff. Example 3: Consider the ordinal space $X = \omega_1 + 1$ consisting of all ordinals $\leq \omega_1$ (the first uncountable ordinal) with the order topology. This space is easily seen to be Hausdorff, and by properties of ordinals it is also compact. It is easy to see that if $A = \omega_1 = [ 0 , \omega_1 )$, then $\omega_1 \in \overline{A}$. However if $( \xi_n )_{n=1}^\infty$ is any sequence in $A$, then there must be a countable ordinal $\alpha < \omega_1$ such that $\xi_n < \alpha$ for all $n$. Then $( \alpha , \omega_1 ]$ is an open neighbourhood of $\omega_1$ containing no members of the sequence, and so the sequence cannot converge to $\omega_1$. - Example 3 is good. –  Paul Jan 19 '13 at 1:36 Generally, for any topological space, the answer is "no". But if the space is a metric (or metrizable) one the answer is "yes". See for instance Munkress' "Topology. A first course", lemma 10.2 (chapter 2.10). But, in fact, as Munkress warns us, we don't need the full strength of the space being metrizable: it suffices that it satisfies the first countability axiom. See also theorem 1.1 in chapter 4.1: for a space $X$ satisfying the first countability axiom and $A \subset X$, $x\in \overline{A}$ if and only if there is a sequence of points of $A$ converging to $x$. For the notion of convergent sequence in an arbitrary topological space, see also the definition in chapter 2.10, taking into account that there might be surprises in non-Hausdorff spaces -like sequences converging to more than one point! - For another example, let $X$ be the Stone–Čech compactification of $\Bbb{N}$. It does not contain any subspace homeomorphic to $A(\aleph_0)$, i.e., in $\beta \mathbb{N}$ there are no non-trivial convergent sequences (see Engleking's book Corollary 3.6.15). The fact that there are no non-trivial convergent sequences in Stone-Čech compactification has been also mentioned (and proven) in Stone-Cech compactifications and limits of sequences. In this blog post proof for $\beta\mathbb N$ is given. –  Martin Sleziak Mar 14 '13 at 6:58
2014-04-18 16:15:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9853845834732056, "perplexity": 66.89285827113987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
https://euphonics.org/8-5-self-excited-vibration/
# 8.5 Self-excited vibration

Musicians do not normally want their instruments to exhibit chaotic behaviour. Nevertheless the ideas of chaos and sensitive dependence do have some important manifestations in musical performance, and we will meet examples later. But our final topic for this chapter is far more obviously central to the business of making music. Some nonlinear systems can produce sustained vibration “out of nowhere”: self-excited vibration. The human voice is an example, and all the musical wind instruments: woodwind, brass, organ pipes and so on. All the bowed-string instruments are also examples, as are other instruments relying on friction to cause vibration. Friction drums, for example, have a rod attached to a drum membrane of some kind, and the player rubs the rod with rosin-coated fingers to produce a sound. The familiar singing effect produced by running a wet finger round the rim of a wineglass relies on similar behaviour.

We will come to musical examples very shortly, but we will start this section with a non-musical example which allows us to make some links with the discussion in the previous sections, based around the phase plane representation. We will look briefly at something called the Van der Pol equation, originally proposed by a Dutch electrical engineer back in 1927 to describe the behaviour of certain electrical circuits based on valves (vacuum tubes). These circuits exhibited spontaneous oscillation: a phenomenon still crucial in many devices, such as the internal “clock” providing the timing signal inside every computer or mobile phone. Since the 1920s, Van der Pol’s equation has been found useful for modelling many other phenomena in physics, geophysics and biology. You can see Van der Pol’s equation, and some extra detail about its history and behaviour, in the next link.

Essentially, the equation describes a linear oscillator modified by a nonlinear damping effect. The coefficient of this nonlinear term determines the details of the behaviour. If it is zero, the system is just a linear oscillator with a single singular point at the origin in the phase plane, which is a centre as expected for an undamped oscillator. But when this nonlinear coefficient is positive, that centre turns into an unstable spiral so that any trajectory starting near the origin spirals outwards. But it doesn’t continue outwards: instead, it tends towards a closed loop in the phase plane, in other words a periodic solution or limit cycle. Furthermore, any trajectory starting outside that limit cycle spirals inwards towards it. The limit cycle is an attractor for (almost) every trajectory.

Figure 1 shows some examples, for different values of the nonlinear coefficient. In each case four trajectories are shown: two in red starting inside the limit cycle, and two in black starting outside it. The same four starting points are used in every case. The limit cycle and its “attractor” behaviour is very clear. The shape of the limit cycle changes with the value of the coefficient. What this means for the waveform of displacement of the oscillator is shown in Fig. 2, for the same four cases. When the coefficient is small, the limit cycle is almost an ellipse and the waveform is almost sinusoidal (after a starting transient). As the coefficient grows, the waveform changes shape. It gradually acquires sharp corners, which inevitably mean that the periodic solution has a significant mixture of higher harmonics in its Fourier series.
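As a concrete illustration of the limit-cycle behaviour just described, here is a minimal numerical sketch (Python/SciPy). The section itself defers the equation to a linked page, so the standard form x'' − μ(1 − x²)x' + x = 0 is assumed here, with μ playing the role of the nonlinear coefficient. Trajectories started close to the origin and far outside it both settle onto the same periodic orbit, whose amplitude is close to 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu):
    x, v = state
    return [v, mu * (1.0 - x**2) * v - x]

mu = 1.0                                   # the nonlinear-damping coefficient
t_eval = np.linspace(0.0, 60.0, 6000)

for x0 in ([0.01, 0.0], [4.0, 0.0]):       # start inside and outside the limit cycle
    sol = solve_ivp(van_der_pol, (0.0, 60.0), x0, t_eval=t_eval,
                    args=(mu,), rtol=1e-8, atol=1e-10)
    late = sol.y[0][-2000:]                # look at the waveform after transients die away
    print(f"start {x0}: late-time amplitude ~ {np.max(np.abs(late)):.2f}")
```

Plotting sol.y[0] against sol.y[1] for several values of mu reproduces the kind of phase-plane picture described for Fig. 1, and plotting sol.y[0] against time shows the change from a near-sinusoidal waveform at small mu to the sharp-cornered waveform at larger mu.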
The frequency of the self-excited vibration also changes, getting lower as the coefficient increases, as is clear in the lower row of plots in Fig. 2. We can now look at an example relating more directly to musical instruments. The discussion will be phrased in terms of a clarinet, but really the description to be given here would apply in general terms to any reed instrument: saxophone, oboe, reed organ pipe or whatever. Figure 3 shows a sketch of the mouthpiece end of a clarinet. The main tube of the instrument is an acoustic duct with internal resonances of the kind we met back in Section 4.2. The tube has tone-holes bored through it: the player can cover some or all of these using fingertips or key mechanisms, to modify the frequencies of the internal acoustic resonances. In a clarinet the tube is usually made of wood or plastic, in instruments like the saxophone it would be made of metal. These different material choices make very little difference to us here, because they all result in a tube that is essentially rigid. All the important action takes place in the air inside the tube. At the mouthpiece, a flexible reed is attached to the tube. The player puts this part in their mouth, and blows. What happens then is a matter of common empirical experience. If the player blows very gently, the only sound is a bit of rushing wind noise associated with the turbulent air flow through the mouthpiece into the tube. But if the player gradually increases the blowing pressure, at a certain point the instrument will “light up” and start to produce a musical note. If the blowing pressure is increased further, the tone quality of the note changes (and perhaps the pitch changes a little as well). The sound tends to get brighter with higher blowing pressure. But if the blowing pressure is increased too far, the instrument “chokes up” and the sound stops. This sequence of events can be understood in a simple way. We can suppose that the player simply provides a constant pressure in their mouth, called $p_0$ in Fig. 3. But just inside the mouthpiece there will be a time-varying pressure which we can call $p(t)$: this is the quantity we would like to understand, since it is responsible for the sound of the instrument. There is a second time-varying quantity we need to think about: this is the flow rate of air from the player’s mouth, through the mouthpiece and into the tube. We will call this $v(t)$, to describe the volume flow rate (in cubic metres per second if we wanted to put a number to it). The two quantities $p(t)$ and $v(t)$ are related to each other in two quite different ways, illustrated schematically as a block diagram in Fig. 4. First, they are connected via the linear acoustical behaviour of the tube. We can imagine a laboratory experiment in which the instrument was supplied with a sinusoidal flow of air at its mouthpiece end by an actuator, and the pressure response inside the mouthpiece was measured by a small microphone. By varying the frequency of the sine wave, the frequency response function of the tube could be measured. This particular frequency response, with volume flow as input and pressure as output, is called the input impedance of the clarinet. Such measurements of input impedance are indeed routinely made on wind instruments of all kinds: Fig. 5 shows one being carried out on a saxophone. The measurement is being done in an anechoic chamber, a special room with sound-absorbing walls to avoid complications from room acoustics. 
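The following paragraphs describe turning such measured signals into an input impedance curve. As a rough sketch of what that post-processing could look like (the sample rate, function names and the Welch-averaging choice are assumptions for illustration, not the book's procedure), the frequency response can be estimated from a recorded volume-flow excitation v(t) and mouthpiece pressure p(t):

```python
import numpy as np
from scipy.signal import csd, welch

def input_impedance(v, p, fs, nperseg=8192):
    """Estimate Z(f) = P(f)/V(f) from measured flow v(t) and pressure p(t)."""
    f, S_vv = welch(v, fs=fs, nperseg=nperseg)      # auto-spectrum of the excitation
    _, S_vp = csd(v, p, fs=fs, nperseg=nperseg)     # cross-spectrum, excitation -> response
    return f, S_vp / S_vv                           # peaks of |Z| mark the tube resonances (cf. Fig. 6)
```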
As with other linear response measurements we have seen earlier, the test need not necessarily be done using a sinusoidal signal. Any input can be used, provided it can be measured. The input and output signals can be converted into the frequency domain using an FFT routine in the computer, just as is done when structural measurements are made using an impulse hammer. A typical measured example of the input impedance of a clarinet is shown in Fig. 6. The peaks in the plot correspond to the resonances of the tube. The pattern of these peaks shows them with approximately harmonic spacing, but only for the odd harmonics 1,3,5… This is as we should expect from the discussion in section 4.2: the tube of a clarinet is approximately a uniform duct (not a tapered one like an oboe or a trumpet), and it is effectively open at the bell end but closed at the mouthpiece end. As shown in Fig. 12 of section 4.2, those conditions lead naturally to odd-harmonic resonance frequencies.

The second relation between $p(t)$ and $v(t)$ involves the mouthpiece and reed, acting as a kind of nonlinear valve. For the purposes of this initial discussion, we will use a severely simplified approximation. We will take no account of the fact that the flexible reed has resonance frequencies of its own, we will simply treat it as behaving like a spring. As a first step, we won’t even allow that much: we can think about how the air flow through the mouthpiece would behave if the reed were rigid. A pressure difference could be applied across this rigid mouthpiece, and the resulting air flow rate could be measured. What we would expect to see, disregarding any subtleties of fluid mechanics such as vortices, would be something like the dashed line in Fig. 7. The bigger the pressure difference, the bigger the air flow. If the air flow is dominated by viscous resistance through the small gap, the behaviour would just be linear, as sketched. The dashed line is sloping downwards rather than upwards because of the sign convention we have used: a positive value of the pressure difference $p(t)-p_0$ corresponds to sucking the mouthpiece, not blowing it, so we would expect $v(t)$ to be negative.

For very small values of the pressure difference, the actual behaviour of the mouthpiece with a flexible reed should be rather like the dashed line. But as the player tries to blow more air through, making $p(t)-p_0$ more and more negative, the reed will be pressed inwards. As a result, the flow rate $v(t)$ will be less than the dashed line would suggest. The result will be something like the solid line in Fig. 7: the further we move towards the left, the more the reed is closed and the more the air flow is restricted. Eventually, in an idealised situation, the reed will close completely against the rigid part of the mouthpiece (called the “lay”). There would be no air flow at all, indicated by the horizontal portion of the curve.

If the player applies a low mouth pressure while the clarinet is not making any sound, that would correspond to shifting to a position on the curve like the red dot in Fig. 8. Now suppose there is a little bit of pressure variation (i.e. sound) inside the tube. This will change the pressure difference a little, in the vicinity of the red dot: the air flow will then vary in a way that follows the tangent to the curve at that point, shown in the red line. Such variations have a simple physical interpretation. The tangent line is down-sloping, similar to the dashed line in Fig. 7.
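To make the shape of the solid curve in Fig. 7 concrete, here is a small sketch of a quasi-static mouthpiece characteristic with the features just described. The specific formula is an assumption chosen for illustration, not the book's model: the reed opening shrinks linearly as the pressure difference d = p − p0 becomes more negative, closing completely at d = −Pc, and the flow through the gap is treated as a simple resistance proportional to the opening.

```python
import numpy as np

R = 1.0    # resistance of the fully open gap (arbitrary units)
Pc = 1.0   # pressure difference at which the reed closes completely

def reed_flow(d):
    """Quasi-static volume flow into the tube for pressure difference d = p - p0."""
    opening = np.clip(1.0 + d / Pc, 0.0, 1.0)   # 1 when d >= 0, 0 when d <= -Pc
    return -d * opening / R

for d in np.linspace(-1.25, 0.25, 7):
    print(f"d = {d:+.2f}   v = {reed_flow(d):+.3f}")

# The slope dv/dd is negative near d = 0 (an ordinary, dissipative resistance, as in Fig. 8)
# and becomes positive for d < -Pc/2 (the 'negative resistance' regime of Fig. 9).
```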
But we know what that line describes: it is the response associated with a viscous resistance, and it involves energy dissipation. But now suppose the player blows a little harder, so that the operating point on the curve shifts to a position like the one shown in Fig. 9. Because we have gone past the peak in the curve, the tangent line now has an upward slope. That would correspond to a negative viscous resistance, and small fluctuations in pressure inside the mouthpiece will result in energy being gained, not lost. (Of course, energy has not been created from nowhere: this energy gain is actually supplied by the player’s lungs doing a little more work.)

We can now make an intuitive leap, and guess what might happen. The tube of the clarinet is an acoustic duct, with resonances. Each of these resonances will, of course, involve some energy dissipation. Energy can be lost in several different ways: some is “lost” because it is carried away by sound waves radiating from the instrument, while some is lost within the tube, mainly by viscous and thermal interaction with the walls. It seems plausible that if for some particular resonance these losses can be compensated by the “negative resistance” effect associated with the mouthpiece, as indicated in Fig. 9, that resonance might become unstable. We saw just such an effect with the Van der Pol equation, in the plots of Fig. 1. The equilibrium position became unstable, leading to growing oscillations, settling down after a while into a periodic limit cycle.

This is exactly what happens with our simplified model of a clarinet, and the result is the behaviour we described earlier, very familiar from real clarinets. There is a threshold of blowing pressure, at which the instrument “lights up” and starts to make sound. The phenomenon is completely dependent on the nonlinear behaviour of the mouthpiece with its flexible reed. There are two different approaches we can use to test whether we have guessed correctly how the clarinet model will behave. The more mathematical of the two involves applying the method of harmonic balance (introduced in section 8.2.2) to the situation of Fig. 9. The details are described in the next link: the conclusion is that there is indeed a threshold of blowing pressure when a self-excited periodic oscillation becomes possible. The prediction of this analysis is that the frequency will correspond to the highest peak in the input impedance, and we can see in Fig. 6 that this is the fundamental resonance of the clarinet tube, as we might have expected.

The alternative approach is to turn directly to numerical simulation, and use the computer to explore how the model behaves. A particularly efficient way to do this involves formulating the behaviour of the linear part of the clarinet in a slightly different way: not based on a frequency response function like the input impedance, but describing the acoustic response of the duct in terms of travelling waves. This approach was first developed in the context of the vibration of a bowed string, and we will explore that application in the next chapter. But the clarinet model gives a simple way to introduce the method. A pleasing name has been given to the approach by Julius Smith [1]: he calls it the “digital waveguide” method.
2021-10-18 04:06:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5599223971366882, "perplexity": 454.0775158377028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00381.warc.gz"}
https://physics.stackexchange.com/questions/595193/why-liquids-evaporate-completely-when-heated-at-their-boiling-points
# Why do liquids evaporate completely when heated at their boiling points?

I'm confused about why the boiling of liquids happens. The boiling point of a liquid is defined as the temperature at which its vapor pressure is equal to the pressure of the gas above the liquid (for example, the atmospheric pressure), and vapor pressure is defined as the (temperature-dependent) partial pressure of the gas state of the liquid at which the rate of condensation is equal to the rate of evaporation. So for example, let's say we have a beaker of water at sea level (1 atm of atmospheric pressure) and we heat it constantly (for example, with a Bunsen burner) to 100ºC; since at that temperature the vapor pressure of water is equal to the atmospheric pressure, we have heated it to its boiling point. What I don't understand is why, after some time, all the beaker's water will become a gas and no liquid water will be left in the beaker: the bubbles of water vapor that form in the beaker will rise (due to lower density), but once they reach the level of the water they should constantly condense and evaporate at the same rate (due to the vapor pressure), so the level of water should remain constant, because the amount of water that leaves as a gas is the same as the amount that condenses back to liquid. But we know that after some time all the water will evaporate and diffuse into the atmosphere, so no liquid water will be left in the beaker. I'm confused about this: why does the liquid completely evaporate when, according to the vapor pressure, gas and liquid should be in equilibrium and some water should remain liquid in the beaker? What am I missing? Thank you for your help; I haven't really found any explanation for this on the internet or in a textbook.

• @Chet Miller: You are right. I was not thinking clearly because I was worried about what would happen to any air. If there is only water and water vapour, and one holds at fixed $T$, then it is the volume that determines the fractions of water and vapour. Nov 20, 2020 at 18:16
2022-05-28 21:04:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7564493417739868, "perplexity": 310.4235112981523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663019783.90/warc/CC-MAIN-20220528185151-20220528215151-00733.warc.gz"}
https://portlandpress.com/bioscirep/article/40/12/BSR20201482/227089/Survival-prediction-in-patients-with-colon
## Abstract The present study proposed a deep learning (DL) algorithm to predict survival in patients with colon adenocarcinoma (COAD) based on multiomics integration. The survival-sensitive model was constructed using an autoencoder for DL implementation based on The Cancer Genome Atlas (TCGA) data of patients with COAD. The autoencoder framework was compared with PCA, NMF, t-SNE, and univariable Cox-PH model for identifying survival-related features. The prognostic robustness of the inferred survival risk groups was validated using three independent confirmation cohorts. Differential expression analysis, Pearson’s correlation analysis, construction of miRNA–target gene network, and function enrichment analysis were performed. Two risk groups with significant survival differences were identified in TCGA set using the autoencoder-based model (log-rank P-value = 5.51e−07). The autoencoder framework showed superior performance compared with PCA, NMF, t-SNE, and the univariable Cox-PH model based on the C-index, log-rank P-value, and Brier score. The robustness of the classification model was successfully verified in three independent validation sets. There were 1271 differentially expressed genes, 10 differentially expressed miRNAs, and 12 hypermethylated genes between the survival risk groups. Among these, miR-133b and its target genes (GNB4, PTPRZ1, RUNX1T1, EPHA7, GPM6A, BICC1, and ADAMTS5) were used to construct a network. These genes were significantly enriched in ECM–receptor interaction, focal adhesion, PI3K–Akt signaling pathway, and glucose metabolism-related pathways. The risk subgroups obtained through a multiomics data integration pipeline using the DL algorithm had good robustness. miR-133b and its target genes could be potential diagnostic markers. The results would assist in elucidating the possible pathogenesis of COAD. ## Introduction Colorectal cancer (CRC) is the fourth most prevalent cancer and the second primary cause of cancer-related death in the United States. [1]. Due to improvements in cancer prevention, screening-based diagnosis, treatment modalities, and other factors, the incidence and mortality rate of CRC have significantly decreased [2]. Nonetheless, prognosis remains poor for patients with advanced colon cancer [3], and 90% of these patients have colon adenocarcinoma (COAD) [4]. Therefore, it is of great practical significance to improve the prognosis of patients with COAD by effective prognostic stratification. Multiomics data integration provides more information on tumorigenesis and development than 1D omics data and delivers additional benefits for precision medicine [5]. Deep learning (DL) allows the processing of high-dimensional data with numerous features and can use its activation function to utilize complicated nonlinear patterns [6]. An autoencoder, a DL algorithm, can reconstruct original input data to produce new features to represent the dataset. The application of DL algorithms is in its infancy for developing prognostic models. For instance, DL-based multiomics integration is robust in predicting the survival of patients with hepatocellular carcinoma [7]. Moreover, autoencoder-based multiomics integration has been used to identify survival-specific subtypes in patients with high-risk neuroblastoma [8] and bladder cancer [9]. However, further studies are needed to predict the survival rate of patients with COAD by integrating multiomics data through DL. 
In the present study, we applied a DL computational framework based on multiomics data (mRNA data, miRNA data, CpG methylation data, and clinical information) from The Cancer Genome Atlas (TCGA) and built a prognostic model based on new features transformed by an autoencoder to stratify patients with COAD. The stratification identified two survival subgroups with significantly different survival rates, which were further successfully validated in three independent datasets. Functional analysis of the two survival subgroups uncovered critical miRNAs, target genes, and signaling pathways in the biology of COAD. The robust classification of patients with COAD using this model may be beneficial for prognosis prediction and the development of precision medicine.

## Methods

### Datasets and preprocessing

We obtained paired RNA sequencing (RNA-seq) data (RNA-seq Illumina HiSeq platform), miRNA sequencing (miRNA-seq) data (miRNA-seq Illumina HiSeq platform), and DNA methylation data (Methylation Illumina 450k platform) of 288 COAD samples with corresponding clinical information from TCGA database as the training set. For preprocessing of the raw data, we first removed the probes or genes with values missing in more than 50% of samples. Methylation data were annotated using the R IlluminaHumanMethylation450kanno.ilmn12.hg19 package [10]. Beta values of the DNA methylation sites in the promoter region of a gene were averaged to give the mean promoter methylation value. Samples were deleted if more than 20% of their features were missing. The remaining missing values were filled in using the impute package (https://www.bioconductor.org/packages/release/bioc/html/impute.html) of R. Finally, input features with zero values across all samples were removed. E-GEOD-17538 with 232 samples (A-AFFY-44, RNA-seq), E-GEOD-39582 with 558 samples (A-AFFY-44, RNA-seq), and E-GEOD-28722 with 125 samples (A-GEOD-13425, RNA-seq) were downloaded from the ArrayExpress database (https://www.ebi.ac.uk/arrayexpress/). These three datasets were used as validation sets. Clinical characteristics of TCGA set and the three validation sets are summarized in Table 1. Furthermore, the detailed clinical information of the samples in TCGA, E-GEOD-17538, E-GEOD-39582, and E-GEOD-28722 datasets is shown in Supplementary Tables S1–S4, respectively.

Table 1 Clinical features of patients in TCGA dataset and three confirmation cohorts

| Variable | TCGA set (N=288) | E-GEOD-17538 (N=232) | E-GEOD-39582 (N=558) | E-GEOD-28722 (N=125) |
| --- | --- | --- | --- | --- |
| Gender (male/female) | 157/131 | 122/110 | 308/250 | – |
| Age (years, mean ± SD) | 65.46 ± 13.26 | 64.73 ± 13.43 | 66.81 ± 13.32 | 65.33 ± 12.95 |
| OS (years, mean ± SD) | 2.61 ± 2.42 | 3.95 ± 2.56 | – | 5.39 ± 3.53 |
| OS status (alive/dead) | 219/69 | 139/93 | – | 55/70 |
| DFS (years, mean ± SD) | 2.40 ± 2.31 | 3.65 ± 2.86 | 4.06 ± 3.37 | 4.98 ± 3.76 |
| DFS status (0/1) | 203/62 | 145/55 | 380/177 | 92/33 |
| Tumor stage (I/II/III/IV) | 44/112/82/40 | 28/72/76/56 | 32/261/201/60 | 23/64/31/5 |

Abbreviations: DFS, disease-free survival; OS, overall survival; TCGA, The Cancer Genome Atlas.
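A compact sketch of the preprocessing rules just described, re-expressed in Python/pandas (the paper used R, with its impute package for the missing-value step; scikit-learn's KNNImputer is used here purely as a stand-in, so treat the exact imputer as an assumption):

```python
import pandas as pd
from sklearn.impute import KNNImputer

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """df: samples x features matrix for one omics layer (NaN = missing)."""
    # drop features missing in more than 50% of samples
    df = df.loc[:, df.isna().mean() <= 0.5]
    # drop samples with more than 20% of their features missing
    df = df.loc[df.isna().mean(axis=1) <= 0.2, :]
    # fill the remaining missing values (the paper used the R 'impute' package)
    df = pd.DataFrame(KNNImputer().fit_transform(df), index=df.index, columns=df.columns)
    # remove features that are zero in every sample
    return df.loc[:, (df != 0).any(axis=0)]
```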
### Feature transformation

We used multiomics data from TCGA set as the input for the autoencoder, a DL framework. As shown in Figure 1 [7], the three matrices were first unit norm-scaled by sample and then stacked into a single matrix. We applied tanh as the activation function for each layer. To train the autoencoder, we employed a gradient descent algorithm with 10 epochs and 50% dropout. With two hidden layers (550 and 1100 nodes, respectively), the autoencoder was implemented using the Python Keras library (https://github.com/fchollet/keras). From the bottleneck layer of the autoencoder model, 275 transformed features were produced from the multiomics data.

#### Overall study design

Figure 1 Overall study design. (A) Autoencoder framework. (B) Construction and validation of the SVM model and further functional analysis.

It should be noted that: (1) this autoencoder has no output noise setting, and the loss function is described below, and (2) the autoencoder is fully connected. The detailed principle of the autoencoder is as follows. Suppose n-dimensional features are input, $x = (x_1, \ldots, x_n)$; the purpose of the autoencoder is to reconstruct $x$ as an output $x'$ through successive hidden layers. For a given layer $i$, we used tanh as the activation function connecting the layer's input $x$ to its output $y$:

$y = f_i(x) = \tanh(W_i \cdot x + b_i)$,

where the sizes of $x$ and $y$ are $d$ and $p$, respectively, and $W_i$ is the weight matrix of size $p \times d$. After the $k$-th layer, $x'$ is defined as

$x' = F_{1 \to k}(x) = f_1 \circ \cdots \circ f_{k-1} \circ f_k(x)$,

the composition of $f_1, \ldots, f_{k-1}, f_k$. To train the autoencoder, we optimized the weight vectors $W_i$. We selected logloss as the objective function, which measures the error between the input $x$ and the output $x'$:

$\mathrm{logloss}(x, x') = -\sum_{k=1}^{d} \left[ x_k \log(x'_k) + (1 - x_k) \log(1 - x'_k) \right]$

To prevent overfitting, we added an L1 regularization penalty with coefficient $\alpha_w$ on the weight vectors $W_i$ and an L2 regularization penalty with coefficient $\alpha_a$ on the activations $F_{1 \to k}(x)$. Therefore, the objective function is defined as follows:

$L(x, x') = \mathrm{logloss}(x, x') + \sum_{i=1}^{k} \left( \alpha_w \|W_i\|_1 + \alpha_a \|F_{1 \to k}(x)\|_2^2 \right)$

### Univariable Cox regression analysis of transformed features and K-means clustering

For each transformed feature, a univariable Cox proportional hazards (Cox-PH) model was constructed. A feature with a log-rank P-value <0.05 was considered significant. We clustered the samples of TCGA set using the K-means clustering algorithm with the NbClust package (https://cran.r-project.org/web/packages/NbClust/index.html) of R. The Silhouette index [11] and the Calinski–Harabasz criterion [12] were used to select the optimal number of clusters: the NbClust function was used to evaluate clusterings for k = 2–6 with these two indices, and the best-supported number of clusters was retained. The detailed values of the Silhouette index and Calinski–Harabasz criterion are shown in Supplementary Table S5. After obtaining labels from K-means clustering, survival of the different risk subgroups was compared using Kaplan–Meier survival curves and the log-rank test. Log-rank P-value [13], C-index [14], and Brier score [15] were calculated to assess the accuracy of survival prediction in the identified risk subgroups.
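The paper states that Keras was used, but the exact layer ordering and hyperparameter values are not fully specified; the sketch below therefore assumes an encoder of 1100 → 550 → 275 (bottleneck) units mirrored in the decoder, and the penalty weights, learning rate and batch size are placeholders. The tanh activations, 50% dropout, 10 epochs, logloss (binary cross-entropy), L1 weight penalties and L2 activity penalty follow the description above.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

n_features = 42256             # 413 miRNAs + 21,754 mRNAs + 20,089 methylation genes (from the Results)
alpha_w, alpha_a = 1e-4, 1e-4  # assumed penalty weights; the paper does not report the values

inputs = keras.Input(shape=(n_features,))
x = layers.Dense(1100, activation="tanh", kernel_regularizer=regularizers.l1(alpha_w))(inputs)
x = layers.Dropout(0.5)(x)
x = layers.Dense(550, activation="tanh", kernel_regularizer=regularizers.l1(alpha_w))(x)
x = layers.Dropout(0.5)(x)
bottleneck = layers.Dense(275, activation="tanh",
                          kernel_regularizer=regularizers.l1(alpha_w),
                          activity_regularizer=regularizers.l2(alpha_a),
                          name="bottleneck")(x)
x = layers.Dense(550, activation="tanh")(bottleneck)
x = layers.Dense(1100, activation="tanh")(x)
outputs = layers.Dense(n_features, activation="sigmoid")(x)   # reconstruct the unit-scaled input in [0, 1]

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),  # "gradient descent"; rate assumed
                    loss="binary_crossentropy")                          # the logloss objective above

# X: samples x stacked, unit-norm-scaled multiomics matrix
# autoencoder.fit(X, X, epochs=10, batch_size=32)
# encoder = keras.Model(inputs, bottleneck)
# transformed = encoder.predict(X)   # the 275 features used in the downstream Cox-PH/K-means steps
```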
### Comparative analysis of DL framework with principal component analysis, nonnegative matrix factorization, and t-distributed stochastic neighbor embedding

The DL framework was compared in terms of performance with other dimensionality reduction techniques, including principal component analysis (PCA) [16], nonnegative matrix factorization (NMF) [17], and t-distributed stochastic neighbor embedding (t-SNE) [18]. For each method, 275 transformed features were produced, matching the number of features in the bottleneck layer of the DL framework. Using the same procedures mentioned above, the 275 transformed features underwent univariable Cox-PH model analysis, followed by K-means clustering of TCGA samples. Moreover, the autoencoder based on the three omics datasets was compared with the univariable Cox-PH model. Specifically, univariable Cox-PH analysis was conducted on all three omics datasets of the TCGA set. The top 13 features were selected according to the C-index score and were used to cluster the samples in the TCGA set following the aforementioned K-means procedure (Figure 1).

### Data partitioning and robustness evaluation

Using the same cross-validation (CV)-like procedure described in a previous study [7], we randomly split the samples of the TCGA dataset into five folds using the caret package of R, among which three folds were used as the training set and the other two folds were used as the test set. Consequently, 10 new combinations (folds) were obtained. For each new combination, the training set (60% of samples) was used to construct a model, which was then verified on the test set (40% of samples). The robustness of the model was evaluated by calculating the log-rank P-value, C-index, and Brier score.

### Supervised classification

Following the K-means clustering analysis, we performed analysis of variance (ANOVA) [19] on each omics dataset from TCGA. The top N features significantly associated with the labels of the risk groups were identified based on ANOVA F-values. Default N values were set to 40 for RNA, 30 for methylation, and 30 for miRNA; with these settings, the log-rank P-value was significant in both the training and validation sets, and the C-index was high. The top 40 mRNA, 30 methylation, and 30 miRNA features identified by ANOVA were each used to construct an SVM classifier for predicting the TCGA test data. The prediction accuracy of the SVM classification model was assessed using the log-rank P-value, C-index, and Brier score. The penalizedSVM package of R was employed to carry out a grid search for the optimal combination of hyperparameters of the SVM model using five-fold CV and to develop the SVM models.

### Confirmation using three independent validation sets

Three independent confirmation sets (E-GEOD-17538, E-GEOD-28722, and E-GEOD-39582), all of which contained RNA-seq data, were used for validation of the two survival risk subgroups. First, we selected the common mRNA features between each validation set and the TCGA set, which were further subjected to median scale normalization and robust scale normalization. After the two scaling steps, the corresponding top 40 mRNA features selected by ANOVA were used to construct an SVM classifier.

### Bioinformatic analysis

Using TCGA data, we performed differential expression analysis in each individual omics layer between the two survival risk groups identified by the autoencoder.
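As a rough illustration of the supervised step and its evaluation (ANOVA-based feature ranking, an SVM tuned by grid search with five-fold CV, and the log-rank/C-index metrics used to judge robustness), a sketch is shown below. It is not the study's code: the study computed ANOVA F-values and fitted penalized SVMs in R and also reported the Brier score; here scikit-learn and lifelines stand in, and the SVM kernel and parameter grid are assumptions.

```python
# Illustrative sketch of the supervised classification and its evaluation.
# The study used R (ANOVA F-values, penalizedSVM, survival metrics); scikit-learn and
# lifelines are used here instead, and the kernel/grid below are assumed values.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

def fit_svm_classifier(X_train, y_train, n_top=40):
    # X_train: omics matrix (samples x features); y_train: risk-group labels from K-means.
    # n_top = 40 for mRNA and 30 for miRNA/methylation features, as in the text.
    pipe = Pipeline([
        ("anova", SelectKBest(f_classif, k=n_top)),   # rank features by ANOVA F-value
        ("svm", SVC(kernel="rbf")),
    ])
    grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": ["scale", 0.01, 0.001]}
    search = GridSearchCV(pipe, grid, cv=5)           # five-fold CV grid search
    search.fit(X_train, y_train)
    return search.best_estimator_

def evaluate_risk_groups(time, event, group, risk_score):
    # time, event: follow-up times and death indicators; group: predicted label (0 = G1, 1 = G2);
    # risk_score: per-sample risk estimate, e.g. from a Cox model fit on the predicted labels.
    a, b = group == 0, group == 1
    lr = logrank_test(time[a], time[b],
                      event_observed_A=event[a], event_observed_B=event[b])
    # concordance_index expects higher scores for longer survival, so the risk is negated;
    # the Brier score (also reported by the study) is omitted from this sketch.
    cindex = concordance_index(time, -np.asarray(risk_score), event)
    return lr.p_value, cindex

# clf = fit_svm_classifier(X_train, y_train, n_top=40)
# groups = clf.predict(X_test)
```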
The DESeq2 package [20] (https://bioconductor.org/packages/release/bioc/html/DESeq2.html) of R was used to identify differentially expressed miRNAs and genes, with |log2FC| >1 and FDR <0.05 as the selection cutoff. A moderated t-test using the limma package (https://bioconductor.org/packages/release/bioc/html/limma.html) in R was used to determine significant differences in methylation, with |beta difference| >0.1 and FDR <0.05 as the strict threshold. To investigate whether DNA methylation affects gene expression, associations between methylation level and gene expression were evaluated by Pearson's correlation analysis. A Pearson correlation coefficient < −0.5 with a P-value <0.0001 was considered significant. To study the regulatory relationships among differentially expressed mRNAs and miRNAs, potential target genes of the identified differentially expressed miRNAs were predicted using the miRDB [21] (prediction score > 80) and TargetScan (probability of conserved targeting > 0.8, http://www.targetscan.org/vert_71/) databases. Among the common target genes between the two databases, the differentially expressed genes (DEGs) between the two survival risk groups were selected to construct an miRNA–target gene network. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis of the identified DEGs was performed using the KOBAS tool. Pathways with an FDR <0.05 were considered significant.

## Results

### Identification of two survival risk groups in TCGA multiomics data

The results shown here are in part based on data generated by the TCGA Research Network: https://www.cancer.gov/tcga. After preprocessing the TCGA multiomics data, we obtained 413 miRNAs from miRNA-seq, 21,754 genes from RNA-seq, and 20,089 genes from DNA methylation data as input features for the autoencoder. The three-omics data were stacked together and transformed into 275 new features by an autoencoder with two hidden layers (550 and 1100 nodes, respectively). Each of the 275 transformed features (detailed information in Supplementary Table S6) was evaluated with a univariable Cox-PH regression model. The 13 features (detailed information in Supplementary Table S7) significantly associated with survival (log-rank P-value < 0.05) were then subjected to K-means clustering analysis. The optimal number of clusters was two. TCGA samples were thus classified into two survival risk groups (G1 and G2). As shown in Figure 2A, better survival was observed in the G1 group compared with the G2 group (log-rank P-value = 5.51e−7). The C-index and Brier score were 0.766 and 0.172, respectively. These results suggested that this classification identified two different prognostic subtypes in patients with COAD.

#### Kaplan–Meier (KM) curves for overall survival (OS) using different strategies

Figure 2 Kaplan–Meier (KM) curves for overall survival (OS) using different strategies. KM curves for OS in The Cancer Genome Atlas (TCGA) set by using (A) autoencoder, (B) principal component analysis (PCA), (C) nonnegative matrix factorization (NMF), (D) t-distributed stochastic neighbor embedding (t-SNE), and (E) univariable Cox proportional hazards (PH) model.
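The screening-and-clustering pipeline applied here (a univariable Cox-PH model for each transformed feature, then K-means on the significant ones) can be sketched as follows. This is not the study's code: the study used R and chose the number of clusters with NbClust, whereas this sketch uses lifelines and scikit-learn and takes the P-value of each single-covariate Cox fit as the screening statistic.

```python
# Illustrative screening + clustering sketch; lifelines and scikit-learn stand in
# for the R survival/NbClust workflow used in the study.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.cluster import KMeans

def screen_and_cluster(features, survival, alpha=0.05, n_clusters=2):
    # features: DataFrame of the 275 transformed features (samples x features)
    # survival: DataFrame with columns "time" (OS in years) and "event" (1 = death)
    significant = []
    for col in features.columns:
        df = pd.concat([features[[col]], survival], axis=1)
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        if cph.summary.loc[col, "p"] < alpha:      # screen at P < 0.05
            significant.append(col)
    # K-means with k = 2; the study selected k via NbClust (Silhouette index and
    # Calinski-Harabasz criterion) rather than fixing it in advance.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0) \
        .fit_predict(features[significant])
    return significant, labels
```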
We compared the autoencoder with other alternative methods, including PCA, NMF, t-SNE, and the univariable Cox-PH model, for prognostic classification of COAD samples. Using each of the first three approaches, we obtained 275 transformed features, the same as the number of transformed features from the bottleneck layer of the autoencoder, and subjected these to a univariable Cox-PH model. In the Cox-PH approach, we performed univariable Cox-PH analysis on each input feature in the three omics data types, ranked the features based on C-index values, and selected the top 13 features, followed by K-means clustering. As depicted in Figure 2B–E, the TCGA dataset was divided into two survival risk groups by each method. PCA (log-rank P-value = 0.191, C-index = 0.633, Brier score = 6.94e−3), NMF (log-rank P-value = 0.203, C-index = 0.708, Brier score = 2.69e−2), t-SNE (log-rank P-value = 0.189, C-index = 0.686, Brier score = 9.24e−4), and the Cox-PH model (log-rank P-value = 0.203, C-index = 0.791, Brier score = 8.41e−2) all failed to yield a significant log-rank P-value <0.05. We found that only the autoencoder could determine significant survival subgroups in patients with COAD.

To validate the robustness of the two inferred survival risk groups obtained by the autoencoder, a classification model was built using the SVM algorithm with CV (Figure 1B). TCGA samples were randomly separated into training (60%) and test (40%) sets. Table 2 shows a high C-index (0.73 ± 0.06), low Brier score (0.14 ± 0.01), and significant log-rank P-value (1.40e−4) for the three-omics training set on average. Similar results were observed for the three-omics test data (log-rank P-value = 2.92e−2, C-index = 0.64 ± 0.11, Brier score = 0.16 ± 0.02). With regard to the test of each single-omics dataset, the model also generated significant but marginally inferior results (Table 2). These results confirmed the robustness of the two inferred survival risk groups to the inherent stochastic processes of autoencoder construction and training sample selection. Multiomics data proved to be superior to single-omics data for model construction.

Table 2 Performance assessment of the classification model using the CV procedure

| Dataset (10-fold CV) | C-index | Brier score | Log-rank P-value (geo. mean) |
| --- | --- | --- | --- |
| Training | | | |
| 3-omics training (60%) | 0.73 ± 0.06 | 0.14 ± 0.01 | 1.40e−4 |
| RNA only | 0.67 ± 0.07 | 0.15 ± 0.01 | 1.93e−3 |
| miRNA only | 0.65 ± 0.05 | 0.14 ± 0.01 | 5.84e−4 |
| Methylation only | 0.68 ± 0.09 | 0.15 ± 0.02 | 1.07e−3 |
| Test | | | |
| 3-omics test (40%) | 0.64 ± 0.11 | 0.16 ± 0.02 | 2.92e−2 |
| RNA only | 0.63 ± 0.16 | 0.16 ± 0.02 | 4.07e−2 |
| miRNA only | 0.62 ± 0.12 | 0.17 ± 0.02 | 3.96e−2 |
| Methylation only | 0.60 ± 0.14 | 0.18 ± 0.02 | 4.72e−2 |

CV, cross-validation.

### Survival risk subtypes were successfully validated in three independent validation datasets

To study the robustness of the classification model for predicting the prognosis of patients with COAD, we tested the model on three independent cohorts (E-GEOD-17538, E-GEOD-39582, and E-GEOD-28722). The numbers of common mRNAs shared by each validation set and the TCGA set were 12,959, 12,959, and 12,478, respectively.
We selected the common top 40 features based on ANOVA F-value, followed by SVM classification. For E-GEOD-17538, we achieved a high C-index of 0.735, a low Brier score of 0.133, and a log-rank P-value of 8.22e−4 for disease-free survival (DFS) time (Figure 3A). The classification using overall survival (OS) time for E-GEOD-17538 generated the following values: log-rank P-value = 1.11e−2, C-index = 0.653, and Brier score = 0.197 (Figure 3D). Additionally, the classification generated good results for the E-GEOD-28722 (log-rank P-value = 1.66e−2, C-index = 0.740, and Brier score = 0.189; Figure 3B) and E-GEOD-39582 (log-rank P-value = 1.46e−2, C-index = 0.642, and Brier score = 0.220; Figure 3C) datasets. The classification using OS time for E-GEOD-28722 generated the following values: log-rank P-value = 2.27e−2, C-index = 0.627, and Brier score = 0.133 (Figure 3E). These results confirmed the reliability of the two autoencoder-derived survival risk groups in COAD.

#### Kaplan–Meier (KM) curves for disease-free survival (DFS) and overall survival (OS) time in different datasets

Figure 3 Kaplan–Meier (KM) curves for disease-free survival (DFS) and overall survival (OS) time in different datasets. KM curves for DFS time in (A) E-GEOD-17538, (B) E-GEOD-39582, and (C) E-GEOD-28722 datasets and KM curves for OS time in (D) E-GEOD-17538 and (E) E-GEOD-28722 datasets using the survival classification model.

### Functional analysis of the two survival subgroups in TCGA dataset

Between the two identified survival risk groups, we found 1271 DEGs, including 828 up-regulated and 443 down-regulated genes in the G2 group relative to the G1 group (|log2FC| > 1 and FDR < 0.05, Figure 4A). In total, 10 differentially expressed miRNAs (DEMs), consisting of 8 up-regulated and 2 down-regulated miRNAs (|log2FC| > 1 and FDR < 0.05), and 12 hypermethylated genes (FDR < 0.05 and |delta methylation| > 0.1) were found (Figure 4B,C).

#### Heat maps for differentially expressed genes between two survival risk groups

Figure 4 Heat maps for differentially expressed genes between two survival risk groups. (A) mRNAs, (B) miRNAs, and (C) differentially methylated genes.

Correlations of methylation β values with gene expression values were evaluated by calculating Pearson's correlation coefficients. Expression levels of phospholipase A2 group IIA (PLA2G2A) and regenerating family member 4 (REG4) were significantly down-regulated by promoter hypermethylation (Pearson's correlation coefficient < −0.5 and P-value < 0.001, Table 3). Potential target genes of the 10 differentially expressed miRNAs were predicted using the miRDB (prediction score > 80) and TargetScan (Pct > 0.8) databases. The common target genes were mapped to the DEGs.
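The methylation–expression screen used to flag PLA2G2A and REG4 (a strong negative Pearson correlation between promoter beta values and expression) can be illustrated with the short sketch below. The data-frame layout and gene list are assumptions, and the cutoffs mirror the ones stated in the Methods.

```python
# Illustrative correlation screen; data layout and gene list are assumed.
from scipy.stats import pearsonr

def methylation_driven_genes(expr, meth, genes, r_cut=-0.5, p_cut=1e-4):
    # expr, meth: DataFrames (samples x genes) of expression values and promoter beta values
    hits = []
    for g in genes:
        r, p = pearsonr(meth[g], expr[g])
        if r < r_cut and p < p_cut:      # strong negative correlation, as in the Methods
            hits.append((g, r, p))
    return hits

# hits = methylation_driven_genes(expr, meth, hypermethylated_genes)
```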
In total, seven target genes of miR-133b, namely G protein subunit beta 4 (GNB4), protein tyrosine phosphatase receptor type Z1 (PTPRZ1), RUNX1 partner transcriptional co-repressor 1 (RUNX1T1), EPH receptor A7 (EPHA7), glycoprotein M6A (GPM6A), BicC family RNA binding protein 1 (BICC1), and ADAM metallopeptidase with thrombospondin type 1 motif 5 (ADAMTS5), were identified, and an miRNA–target gene network was built (Figure 5).

#### miR-133b–target gene network

Figure 5 miR-133b–target gene network.

Table 3 Correlation analysis of RNA expression and methylation data

| Gene | RNA data log2FC | RNA data FDR | Methylation diff beta | Methylation FDR | Correlation coefficient | P-value |
| --- | --- | --- | --- | --- | --- | --- |
| PLA2G2A | −1.26 | 1.14e−5 | 0.11 | 2.46e−7 | −0.53 | 3.26e−22 |
| REG4 | −1.28 | 6.64e−3 | 0.10 | 1.33e−5 | −0.61 | 3.13e−30 |

We performed pathway enrichment analysis for the DEGs. The up-regulated genes were significantly associated with ECM–receptor interaction, focal adhesion, and PI3K-Akt signaling pathways (Figure 6A, FDR < 0.05), while the down-regulated genes were significantly associated with nitrogen metabolism, mucin type O-glycan biosynthesis, and pentose and glucuronate interconversions (Figure 6B, FDR < 0.05).

#### Significantly enriched pathways for differentially expressed genes

Figure 6 Significantly enriched pathways for differentially expressed genes. (A) Top 10 significant pathways for up-regulated genes; (B) top 10 significant pathways for down-regulated genes.

## Discussion

At present, the survival and prognosis of patients with COAD are poor. Accurate stratification of patients with COAD indicative of prognosis would help to select the optimal therapy for each patient. In the present study, RNA-seq, miRNA-seq, and DNA methylation data of the same patients were downloaded from TCGA to develop a survival prediction model using the DL framework. The present model was based only on the TCGA database and not the GEO database; apart from TCGA, no other database such as GEO contained cancer samples with matched RNA-seq, miRNA, and methylation data, so the GEO database could not be used to verify the dimensionality reduction of the autoencoder model. Two risk subgroups with robust survival differences were inferred by the autoencoder framework based on multiomics data. According to the C-index value, log-rank P-value, and Brier score, the autoencoder algorithm was preferable to other alternative approaches, including PCA, NMF, t-SNE, and the univariable Cox-PH model, for selecting survival-related features, emphasizing the utility of this approach. The reliability of the two inferred risk groups was confirmed using the CV procedure. Moreover, this classification model generated good results in terms of C-index, log-rank P-value, and Brier score on three additional validation datasets containing RNA-seq data, confirming its predictive efficiency. In the present study, the autoencoder model was used to reduce the dimension of the multiomics data. The bottleneck features are the dimensionality-reduced features of the autoencoder model, which were then combined with the traditional risk grouping method.
Our emphasis was on the advantages of multiomics integration, and we also compared the autoencoder with other dimensionality reduction methods. We aimed to obtain transformed features by DL-based multiomics integration and dimensionality reduction and carried out risk assessment based on the dimensionality-reduced features. Our results indicated that the present dimensionality reduction method was superior to other dimensionality reduction methods as well as to single-omics analyses, while avoiding the platform and omics-type differences that complicate multiomics integration. The present study will help improve the prognosis of patients with COAD. The three confirmation cohorts used in the present study consisted of RNA-seq data only. Large cohort studies with good-quality samples are anticipated to further validate the predictive utility of this two-risk-group model.

We performed integrative bioinformatic analysis to search for critical molecules involved in the biology of COAD. A total of 1271 DEGs, 10 DEMs, and 12 hypermethylated genes were identified between the G1 and G2 risk subgroups. Notably, down-regulation of PLA2G2A and REG4 by promoter hypermethylation was observed. Phospholipase A2 is an enzyme related to the hydrolysis of fatty acyl esters. Expression of PLA2G2A and REG4 is of prognostic value for patients with stage II CRC [22,23]. Furthermore, miR-133b and its seven target genes (GNB4, PTPRZ1, RUNX1T1, EPHA7, GPM6A, BICC1, and ADAMTS5) were differentially expressed between the two risk groups. All eight genes are associated with enhanced cell proliferation, invasion, and migration and with poor prognosis in various cancers [24]. miR-133b expression is decreased in CRC, suppresses CRC metastasis, and is associated with the OS of CRC [25,26]. miR-133b down-regulation promotes CRC invasion and migration by modulating CXCR4 [24]. GNB4, a signal transduction molecule involved in the PI3K-AKT pathway, is hypermethylated and down-regulated in both CRC cell lines and colon cancers [27]. PTPRs are a subgroup of tyrosine phosphatases that participate in regulating the cell signaling events of several critical biological processes, such as proliferation, apoptosis, and migration [28]. PTPRZ1 expression is elevated in CRC, implicating its involvement in CRC development [29]. RUNX1T1, a transcriptional co-repressor, acts as a critical regulator of leukemogenesis and may play a suppressive role in CRC progression [30]. The involvement of Eph/ephrin signaling in a wide range of biological processes related to tumor progression and metastasis, such as cell attachment, migration, and angiogenesis, has been characterized [31]. The down-regulation of EphA7 by hypermethylation occurs in CRC [32]. GPM6A is a transmembrane protein that plays an important role in the differentiation and migration of neurons [33]. Overexpression of miR-133b in neuronal cultures leads to the downregulation of GPM6A, suggesting that GPM6A is a novel target for epigenetic regulation during prenatal stress [34]. The gene product Bicc1 is an RNA-binding molecule involved in regulating various proteins at the post-transcriptional level [35]. BICC1 is a genetic determinant of osteoblastogenesis and bone mineral density [36]. ADAMTS5 is a secreted proteinase that participates in cell adhesion, proliferation, and migration. ADAMTS5 is overexpressed in CRC, promoting CRC metastasis and cancer cell invasion [37].
High expression of ADAMTS5 is a potent biomarker for lymphatic invasion and lymph node metastasis in CRC [38]. These results indicate that these molecules may be used as promising biomarkers and therapeutic targets for COAD.

Functional analysis of the up-regulated DEGs revealed significant enrichment of various signaling pathways, such as ECM–receptor interaction, focal adhesion, and the PI3K-Akt signaling pathway. The down-regulated genes were significantly involved in several signaling pathways associated with glucose metabolism. A rich body of evidence has shown that the PI3K-Akt signaling pathway plays an essential role in the progression of colon cancer and is a promising target for cancer treatment [39,40]. Glucose intake is high in cancer cells, together with the production of lactic acid [41]. However, these results are based on bioinformatic analysis alone. We hope that the results of the present study will be beneficial for elucidating the possible pathogenesis of COAD. Here, we performed an extensive study based on published data and bioinformatic analysis. The results of the present study should be further validated using in vitro or in vivo models. We hope that the results of this study will be beneficial to future research.

## Conclusion

The present study robustly distinguishes survival subpopulations of patients with COAD using DL-based multiomics integration. This classification is of direct clinical relevance and may contribute to improved outcomes in patients with COAD. miR-133b, GNB4, PTPRZ1, RUNX1T1, EPHA7, GPM6A, BICC1, and ADAMTS5 may be important molecular targets for COAD.

## Data Availability

All data used and/or analyzed in this study are available from the TCGA database (https://gdc-portal.nci.nih.gov/) or the EBI ArrayExpress database (https://www.ebi.ac.uk/arrayexpress/).

## Competing Interests

The authors declare that there are no competing interests associated with the manuscript.

## Funding

The authors declare that there are no sources of funding to be acknowledged.

## Author Contribution

J.D.L. designed and performed the research, analyzed the data, and wrote the manuscript. J.J.W. and X.J.S. participated in the collection of clinical samples. F.F.L. and S.X.G. participated in the experimental design and provided financial and instrumental support. All authors have read and approved the final manuscript.

## Abbreviations

• DFS: disease-free survival
• DL: deep learning
• NMF: nonnegative matrix factorization
• OS: overall survival
• PCA: principal component analysis
• TCGA: The Cancer Genome Atlas
• t-SNE: t-distributed stochastic neighbor embedding

## References

1. Benson A.B. , Venook A.P. , Al-Hawary M.M. , Cederquist L. , Chen Y.-J. , Ciombor K.K. et al. ( 2018 ) NCCN guidelines insights: colon cancer, version 2.2018 . J. Natl. Comprehensive Cancer Network 16 , 359 369 [PubMed] 2. Edwards B.K. , Ward E. , Kohler B.A. , Eheman C. , Zauber A.G. , Anderson R.N. et al. ( 2010 ) Annual report to the nation on the status of cancer, 1975‐2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates . Cancer: Interdisciplinary Int. J. Am. Cancer Soc. 116 , 544 573 3. Anguraj S. , Lyssiotis C.A. , Krisztian H. , Collisson E.A. , Gibb W.J. , Stephan W. et al. ( 2013 ) A colorectal cancer classification system that associates cellular phenotype and responses to therapy . Nat. Med. 19 , 619 625 [PubMed] 4. Fatemeh H. , Saeed A. , K. and Mehdi E.
( 2014 ) Clinicopathological features of colon adenocarcinoma in Qazvin, Iran: a 16 year study . Asian Pacific J. Cancer Prevention APJCP 15 , 951 5. Huang S. , Chaudhary K. and Garmire L.X. ( 2017 ) More is better: recent progress in multi-omics data integration methods . Front. Genet. 8 , 84 [PubMed] 6. Lan K. , Wang D.-t. , Fong S. , Liu L.-s. , Wong K.K. and Dey N. ( 2018 ) A survey of data mining and deep learning in bioinformatics . J. Med. Syst. 42 , 139 [PubMed] 7. Chaudhary K. , Poirion O.B. , Lu L. and Garmire L.X. ( 2018 ) Deep Learning-Based Multi-Omics Integration Robustly Predicts Survival in Liver Cancer . Clin. Cancer Res. 24 , 1248 1259 [PubMed] 8. Zhang L. , Lv C. , Jin Y. , Cheng G. , Fu Y. , Yuan D. et al. ( 2018 ) Deep learning-based multi-omics data integration reveals two prognostic subtypes in high-risk neuroblastoma . Front. Genet. 9 , 9. Poirion O.B. , Chaudhary K. and Garmire L.X. ( 2018 ) Deep Learning data integration for better risk stratification models of bladder cancer . AMIA Summits on Translational Sci. Proc. 2017 , 197 10. IlluminaHumanMethylation450kanno HK ( 2014 ) ilmn12. hg19: Annotation for Illumina's 450k methylation arrays . R package version 02 1 11. Rousseeuw P.J. ( 1987 ) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis . J. Comput. Appl. Math. 20 , 53 65 12. Rocci R. and Vichi M. ( 2008 ) Two-mode multi-partitioning . Comput. Statistics Data Analysis 52 , 1984 2003 13. O'brien P.C. ( 1988 ) Comparing two samples: extensions of the t, rank-sum, and log-rank tests . J. Am. Statist. Assoc. 83 , 52 61 14. Simmons M.N. , Ching C.B. , M.K. , Park C.H. and Gill I.S. ( 2010 ) Kidney tumor location measurement using the C index method . J. Urol. 183 , 1708 1713 [PubMed] 15. Gerds T.A. and Schumacher M. ( 2006 ) Consistent estimation of the expected Brier score in general survival models with right‐censored event times . Biometrical J. 48 , 1029 1040 16. Jolliffe I.T. and J. ( 2016 ) Principal component analysis: a review and recent developments . Philos. Transact. Royal Soc. A: Mathemat. Phys. Eng. Sci. 374 , 20150202 17. Fu X. , Huang K. , Sidiropoulos N.D. and Ma W.-K. ( 2019 ) Nonnegative matrix factorization for signal and data analytics: Identifiability, algorithms, and applications . IEEE Signal Process. Mag. 36 , 59 80 18. Gisbrecht A. , Schulz A. and Hammer B. ( 2015 ) Parametric nonlinear dimensionality reduction using kernel t-SNE . Neurocomputing 147 , 71 82 19. Bhapkar V.P. ( 1980 ) 11 ANOVA and MANOVA: Models for categorical data . Handbook Statistics 1 , 343 387 20. Love M.I. , Huber W. and Anders S. ( 2014 ) Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2 . Genome Biol. 15 , 550 [PubMed] 21. Wong N. and Wang X. ( 2014 ) miRDB: an online resource for microRNA target prediction and functional annotations . Nucleic Acids Res. 43 , D146 D152 [PubMed] 22. Buhmeida A. , Bendardaf R. , Hilska M. , Laine J. , Collan Y. , Laato M. et al. ( 2009 ) PLA2 (group IIA phospholipase A2) as a prognostic determinant in stage II colorectal carcinoma . Ann. Oncol. 20 , 1230 1235 [PubMed] 23. Zhu X. , Han Y. , Yuan C. , Tu W. , Qiu G. , Lu S. et al. ( 2015 ) Overexpression of Reg4, alone or combined with MMP-7 overexpression, is predictive of poor prognosis in colorectal cancer . Oncol. Rep. 33 , 320 328 [PubMed] 24. Duan F.T. , Qian F. , Fang K. , Lin K.Y. , Wang W.T. and Chen Y.Q. 
( 2013 ) miR-133b, a muscle-specific microRNA, is a novel prognostic marker that participates in the progression of human colorectal cancer via regulation of CXCR4 expression . Mol. Cancer 12 , 164 [PubMed] 25. Pinar A.A. , Susanne E. , Iryna K. , Stefano C. , Ozata D.M. , Hong X. et al. ( 2011 ) miR-185 and miR-133b deregulation is associated with overall survival and metastasis in colorectal cancer . Int. J. Oncol. 39 , 311 318 [PubMed] 26. Lin C. , Li X. , Zhang Y. , Hu G. , Guo Y. , Zhou J. et al. ( 2014 ) TAp63 suppress metastasis via miR-133b in colon cancer cells . Br. J. Cancer 110 , 2310 [PubMed] 27. Wang X. , Kuang Y.-Y. and Hu X.-T. ( 2014 ) Advances in epigenetic biomarker research in colorectal cancer . World J. Gastroenterol. 20 , 4276 [PubMed] 28. Du Y. and Grandis J.R. ( 2015 ) Receptor-type protein tyrosine phosphatases in cancer . Chin. J. Cancer 34 , 61 [PubMed] 29. Laczmanska I. , Karpinski P. , Gil J. , Laczmanski L. , Bebenek M. and M.M. ( 2016 ) High PTPRQ expression and its relationship to expression of PTPRZ1 and the presence of KRAS mutations in colorectal cancer tissues . Anticancer Res. 36 , 677 681 [PubMed] 30. Alfayez M. , Vishnubalaji R. and Alajez N.M. ( 2016 ) Runt-related Transcription Factor 1 (RUNX1T1) Suppresses Colorectal Cancer Cells Through Regulation of Cell Proliferation and Chemotherapeutic Drug Resistance . Anticancer Res. 36 , 5257 5264 [PubMed] 31. Giaginis C. , Tsoukalas N. , Bournakis E. , Alexandrou P. , Kavantzas N. , Patsouris E. et al. ( 2014 ) Ephrin (Eph) receptor A1, A4, A5 and A7 expression in human non-small cell lung carcinoma: associations with clinicopathological parameters, tumor proliferative capacity and patients’ survival . BMC Clin. Pathol. 14 , 8 [PubMed] 32. Wang J. , Kataoka H. , Suzuki M. , Sato N. , Nakamura R. , Tao H. et al. ( 2005 ) Downregulation of EphA7 by hypermethylation in colorectal cancer . Oncogene 24 , 5637 [PubMed] 33. Michibata H. , Okuno T. , Konishi N. , Kyono K. , Wakimoto K. , Aoki K. et al. ( 2009 ) Human GPM6A is associated with differentiation and neuronal migration of neurons derived from human embryonic stem cells . Stem Cells Dev. 18 , 629 640 [PubMed] 34. Monteleone M.C. , E. , Pallarés M.E. , Antonelli M.C. , Frasch A.C. and Brocco M.A. ( 2014 ) Prenatal stress changes the glycoprotein GPM6A gene expression and induces epigenetic changes in rat offspring brain . Epigenetics 9 , 152 160 [PubMed] 35. Lian P. , Li A. , Li Y. , Liu H. , Liang D. , Hu B. et al. ( 2014 ) Loss of polycystin-1 inhibits Bicc1 expression during mouse development . PLoS ONE 9 , e88816 [PubMed] 36. Mesner L.D. , Brianne R. , Yi-Hsiang H. , Ani M. , Eric L. , Bryda E.C. et al. ( 2014 ) Bicc1 is a genetic determinant of osteoblastogenesis and bone mineral density . J. Clin. Invest. 124 , 2736 2749 [PubMed] 37. Yu L. , Lu Y. , Han X. , Zhao W. , Li J. , Mao J. et al. ( 2016 ) microRNA-140-5p inhibits colorectal cancer invasion and metastasis by targeting ADAMTS5 and IGFBP5 . Stem Cell Res. Therapy 7 , 180 38. Haraguchi N. , Ohara N. , Koseki J. , Takahashi H. , Nishimura J. , Hata T. et al. ( 2017 ) High expression of ADAMTS5 is a potent marker for lymphatic invasion and lymph node metastasis in colorectal cancer . Mol. Clin. Oncol. 6 , 130 134 [PubMed] 39. Xiangliang Z. , Huijuan S. , Hongsheng T. , Zhiyuan F. , Jiping W. and Shuzhong C. ( 2015 ) miR-218 inhibits the invasion and migration of colon cancer cells by targeting the PI3K/Akt/mTOR signaling pathway . Int. J. Mol. Med. 35 , 1301 1308 [PubMed] 40. Ke T.W. , Wei P.L. 
, Yeh K.T. , Chen T.L. and Cheng Y.W. ( 2015 ) MiR-92a Promotes Cell Metastasis of Colorectal Cancer Through PTEN-Mediated PI3K/AKT Pathway . Ann. Surg. Oncol. 22 , 2649 2655 [PubMed] 41. Fang S. and Fang X. ( 2016 ) Advances in glucose metabolism research in colorectal cancer . Biomed. Rep. 5 , 289 295 [PubMed]
# lyl

##### Life Years Lost at one specific age

lyl estimates the remaining life expectancy and Life Years Lost for a given population after a specific age age_specific, restricted to a maximum theoretical age $\tau$.

##### Usage

lyl(data, t0 = NULL, t, status, age_specific, censoring_label = "Alive", death_labels = "Dead", tau = 100)

##### Arguments

• data: A dataframe, where each row represents a person. The dataframe will have a time-to-event format with at least two variables: age at end of follow-up (t) and a status indicator with death/censoring (status).
• t0: Age at start of the follow-up time. Default is NULL, which means all subjects are followed from birth. For delayed entry, t0 indicates age at the beginning of follow-up.
• t: Age at the end of the follow-up time (death or censoring).
• status: Status indicator, normally 0=alive, 1=dead. Other choices are TRUE/FALSE (TRUE = death) or 1/2 (2=death). For multiple causes of death (competing risks analysis), the status variable will be a factor, whose first level is treated as censoring; or a numeric variable, whose lowest level is treated as censoring. In the latter case, the label for censoring is censoring_label ("Alive" by default).
• age_specific: Specific age at which the Life Years Lost have to be estimated.
• censoring_label: Label for censoring status ("Alive" by default).
• death_labels: Label for event status. For only one cause of death, "Dead" is the default. For multiple causes, the defaults are the values given in variable status.
• tau: Remaining life expectancy and Life Years Lost are estimated restricted to a maximum theoretical age $\tau$ ($\tau$=100 years by default).

##### Value

A list with class "lyl" containing the following components:

• data: Data frame with 3 variables and as many observations as the original data provided to estimate Life Years Lost: t0, t, and status
• LYL: Data frame with 1 observation and at least 3 variables: age, which corresponds to age_specific; life_exp, which is the estimated remaining life expectancy at age age_specific years and before age tau years; and one variable corresponding to the estimated Life Years Lost for each specific cause of death. If only one cause of death is considered (no competing risks), this variable is Dead and includes the total overall Life Years Lost
• tau: Maximum theoretical age $\tau$
• age_specific: Specific age at which the Life Years Lost have been estimated
• data_plot: A data frame in long format with 3 variables time, cause, and cip used to create a figure of Life Years Lost with function plot.
• censoring_label: Label for censoring status
• death_labels: Label(s) for death status
• competing_risks: Logical value (TRUE = more than one cause of death (competing risks))
• type: Whether the estimation is at "age_specific" or "age_range".

##### References

• Andersen PK. Life years lost among patients with a given disease. Statistics in Medicine. 2017;36(22):3573-3582.
• Andersen PK. Decomposition of number of life years lost according to causes of death. Statistics in Medicine. 2013;32(30):5278-5285.

##### See Also

• lyl_range for estimation of Life Years Lost for a range of different ages.
• lyl_ci to estimate bootstrapped confidence intervals.
• lyl_diff to compare Life Years Lost for two populations.
• summary.lyl to summarize objects obtained with function lyl.
• plot.lyl to plot objects obtained with function lyl.
##### Aliases

• lyl

##### Examples

```r
# Load simulated data as example
data(simu_data)

# Estimate remaining life expectancy and Life Years
# Lost after age 45 years and before age 95 years
lyl_estimation <- lyl(data = simu_data, t = age_death, status = death,
                      age_specific = 45, tau = 95)

# Summarize and plot the data
summary(lyl_estimation)
plot(lyl_estimation)

# Estimate remaining life expectancy and Life Years
# Lost due to specific causes of death after age 45
# years and before age 95 years
lyl_estimation2 <- lyl(data = simu_data, t = age_death, status = cause_death,
                       age_specific = 45, tau = 95)

# Summarize and plot the data
summary(lyl_estimation2)
plot(lyl_estimation2)
```

Documentation reproduced from package lillies, version 0.2.4, License: MIT + file LICENSE
Longest Continuous Subarray With Absolute Diff Less Than or Equal to Limit in C++

Suppose we have an array of integers called nums and an integer limit; we have to find the size of the longest non-empty subarray such that the absolute difference between any two items of this subarray is less than or equal to the given limit.

So, if the input is nums = [8,2,4,7] and limit = 4, then the output will be 2, because −

• [8] so |8-8| = 0 <= 4.
• [8,2] so |8-2| = 6 > 4.
• [8,2,4] so |8-2| = 6 > 4.
• [8,2,4,7] so |8-2| = 6 > 4.
• [2] so |2-2| = 0 <= 4.
• [2,4] so |2-4| = 2 <= 4.
• [2,4,7] so |2-7| = 5 > 4.
• [4] so |4-4| = 0 <= 4.
• [4,7] so |4-7| = 3 <= 4.
• [7] so |7-7| = 0 <= 4.

Finally, the size of the longest subarray is 2.

To solve this, we will follow these steps (here k denotes the given limit) −

• ret := 0, i := 0, j := 0
• Define one deque maxD and another deque minD
• n := size of nums
• for initialize i := 0, when i < n, update (increase i by 1), do −
  • while (maxD is not empty and last element of maxD < nums[i]), do −
    • delete last element from maxD
  • while (minD is not empty and last element of minD > nums[i]), do −
    • delete last element from minD
  • insert nums[i] at the end of maxD
  • insert nums[i] at the end of minD
  • while (first element of maxD - first element of minD) > k, do −
    • if nums[j] is same as first element of maxD, then −
      • delete front element from maxD
    • if nums[j] is same as first element of minD, then −
      • delete front element from minD
    • (increase j by 1)
  • ret := maximum of ret and (i - j + 1)
• return ret

The two deques are kept monotonic: maxD stores a non-increasing sequence whose front is the current window maximum, and minD stores a non-decreasing sequence whose front is the current window minimum. Each element is pushed and popped at most once, so the algorithm runs in O(n) time.

Example

Let us see the following implementation to get a better understanding −

```cpp
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
   int longestSubarray(vector<int>& nums, int k) {
      int ret = 0;
      int j = 0;
      deque<int> maxD;   // front always holds the current window maximum
      deque<int> minD;   // front always holds the current window minimum
      int n = nums.size();
      for (int i = 0; i < n; i++) {
         // keep maxD non-increasing and minD non-decreasing
         while (!maxD.empty() && maxD.back() < nums[i])
            maxD.pop_back();
         while (!minD.empty() && minD.back() > nums[i])
            minD.pop_back();
         maxD.push_back(nums[i]);
         minD.push_back(nums[i]);
         // shrink the window from the left while max - min exceeds the limit
         while (maxD.front() - minD.front() > k) {
            if (nums[j] == maxD.front())
               maxD.pop_front();
            if (nums[j] == minD.front())
               minD.pop_front();
            j++;
         }
         ret = max(ret, i - j + 1);
      }
      return ret;
   }
};

int main() {
   Solution ob;
   vector<int> v = {7, 8, 2, 4};
   cout << (ob.longestSubarray(v, 4));
}
```

Input

{7,8,2,4}, 4

Output

2
## Papers ■Published Papers: 1. Yoshiyuki Kagei and Yuka Teramoto, On the spectrum of the linearized operator around compressible Couette flows between two concentric cylinders, J. Math. Fluid Mech., accepted. 2. Abulizi Aihaiti and Yoshiyuki Kagei, Asymptotic behavior of solutions of the compressible Navier-Stokes equations in a cylinder under the slip boundary condition, Math. Methods Appl. Sci., 42 (2019), no. 10, pp. 3428--3464. 3. Yoshiyuki Kagei and Takaaki Nishida, Traveling waves bifurcating from plane Poiseuille flow of the compressible Navier-Stokes equation, Arch. Rational Mech. Anal., 231 (2019), no. 1, pp. 1--44. 4. Yoshiyuki Kagei, Takaaki Nishida and Yuka Teramoto, On the spectrum for the artificial compressible system, J. Differential Equations, 264 (2018), no. 2, pp. 897--928. 5. Shota Enomoto and Yoshiyuki Kagei, Asymptotic behavior of the linearized semigroup at space-periodic stationary solution of the compressible Navier-Stokes equation, J. Math. Fluid Mech., 19 (2017), no. 4, pp. 739--772. 6. Yoshiyuki Kagei and Takaaki Nishida, On Chorin's method for stationary solutions of the Oberbeck-Boussinesq equation, J. Math. Fluid Mech., 19 (2017), no. 2, pp. 345--365. DOI: 10.1007/s00021-016-0284-3 7. Abulizi Aihaiti, Shota Enomoto, Yoshiyuki Kagei, Large time behavior of solutions to the compressible Navier-Stokes equations in an infinite layer under slip boundary condition, Math. Models Meth. Appl. Sci., 26 (2016), no.14, pp.2617--2649. DOI: http://dx.doi.org/10.1142/S0218202516500615 8. Yoshiyuki Kagei and Masatoshi Okita, Asymptotic profiles for the compressible Navier-Stokes equations on the whole space, J. Math. Anal. Appl., 445 (2017), no. 1, pp. 297--317. http://doi.org/10.1016/j.jmaa.2016.07.024 9. Yoshiyuki Kagei and Michael Ruzicka, The Oberbeck-Boussinesq approximation as a constitutive limit, Continuum Mechanics and Thermodynamics 28 (2016), no. 5, pp. 1411--1419. DOI 10.1007/s00161-015-0483-9 10. Yoshiyuki Kagei and Ryouta Oomachi, Stability of time periodic solution of the Navier-Stokes equation on the half-space under oscillatory moving boundary condition, J. Differential Equations 261 (2016), pp. 3366--3413. 11. Reika Aoyama and Yoshiyuki Kagei, Spectral properties of the semigroup for the linearized compressible Navier-Stokes equation around a parallel flow in a cylindrical domain, Advances in Differential Equations 21 (2016), no. 3-4, pp. 265--300. 12. Reika Aoyama and Yoshiyuki Kagei, Large time behavior of solutions to the compressible Navier-Stokes equations around a parallel flow in a cylindrical domain, Nonlinear Analysis Series A: Theory, Methods and Applications 127 (2015), pp. 362--396. doi:10.1016/j.na.2015.07.009 13. Yoshiyuki Kagei and Naoki Makio, Spectral properties of the linearized semigroup of the compressible Navier-Stokes equation on a periodic layer, Publ. Res. Inst. Math. Sci., 51, no. 2 (2015), pp. 337--372. 14. Yoshiyuki Kagei and Takaaki Nishida, Instability of plane Poiseuille flow in viscous compressible gas, J. Math. Fluid Mech., vol. 17 (2015), no.1, pp. 129--143. DOI: 10.1007/s00021-014-0191-4 15. Yoshiyuki Kagei and Kazuyuki Tsuda, Existence and stability of time periodic solution to the compressible Navier-Stokes equation for time periodic external force with symmetry, J. Differential Equations, vol. 258 (2015), pp. 399--444. 16.
Jan Brezina and Yoshiyuki Kagei, Spectral properties of the linearized compressible Navier-Stokes equation around time-periodic parallel flow, J. Differential Equations, vol. 255 (2013), no. 6, pp. 1132--1195. 17. Yoshiyuki Kagei and Yasunori Maekawa,On asymptotic behaviors of solutions to parabolic systems modelling chemotaxis, J. Differential Equations, vol. 253 (2012), no.11, pp. 2951--2992. 18. Yoshiyuki Kagei, Asymptotic behavior of solutions to the compressible Navier-Stokes equation around a parallel flow, Arch. Rational Mech. Anal. vol. 205 (2012), no. 2, pp. 585--650. 19. Jan Brezina and Yoshiyuki Kagei, Decay properties of solutions to the linearized compressible Navier-Stokes equation around time-periodic parallel flow, Math. Models Meth. Appl. Sci., vol. 22 (2012), 1250007 (53 pages). 20. Yoshiyuki Kagei, Global existence of solutions to the compressible Navier-Stokes equation around parallel flows, J. Differential Equations, vol. 251 (2011), no. 11, pp. 3248--3295. 21. Yoshiyuki Kagei and Yasunori Maekawa, Asymptotic behaviors of solutions to evolution equations in the presence of translation and scaling invariance, J. Functional Analysis, vol. 260 (2011), no. 10, pp. 3036--3096. 22. Yoshiyuki Kagei, Asymptotic behavior of solutions of the compressible Navier-Stokes equation around the plane Couette flow, J. Math. Fluid Mech., vol. 13 (2011), no. 1, pp. 1--31. 23. Yoshiyuki Kagei, Yu Nagafuchi and Takeshi Sudou, Decay estimates on solutions of the linearized compressible Navier-Stokes equation around a Poiseuille type flow, Journal of Math-for-Industory, vol. 2 (2010A), pp. 39--56. Correction to "Decay estimates on solutions of the linearized compressible Navier-Stokes equation around a Poiseuille type flow" in J. Math-for-Ind., vol. 2 (2010A), pp. 39--56, J. Math-for-Ind., vol. 2 (2010B), pp. 235. 24. Yuya Ishihara and Yoshiyuki Kagei, Large time behavior of the semigroup on $L^p$ spaces associated with the linearized compressible Navier-Stokes equation in a cylindrical domain, J. Differential Equations, vol. 248 (2010), no. 2, pp. 252--286. 25. Yoshiyuki Kagei and Takumi Nukumizu, Asymptotic behavior of solutions to the compressible Navier-Stokes equation in a cylindrical domain, Osaka J. Math., vol. 45 (2008), no. 4, pp. 987--1026. 26. Yoshiyuki Kagei, Large time behavior of solutions to the compressible Navier-Stokes equation in an infinite layer, Hiroshima Math. J., vol. 38 (2008), no. 1, pp. 95 -- 124. 27. Yoshiyuki Kagei, Asymptotic behavior of the semigroup associated with the linearized compressible Navier-Stokes equation in an infinite layer, Publ. Res. Inst. Math. Sci., vol. 43 (2007), no. 3, pp. 763--794. 28. Resolvent estimates for the linearized compressible Navier-Stokes equation in an infinite layer, Yoshiyuki Kagei, Funkcial. Ekvac., vol.50 (2007),no. 2, pp. 287--337. 29. Stability of planar stationary solutions to the compressible Navier-Stokes equation on the half space, Yoshiyuki Kagei and Shuichi Kawashima, Commun. Math. Phys., vol.266 (2006), no. 2, pp.401 -- 430. 30. Yoshiyuki Kagei and Shuichi Kawashima, Local solvability of initial boundary value problem for a quasilinear hyperbolic-parabolic system, Journal of Hyperbolic Differential Equations, vol.3 (2006), no. 2, pp.195 -- 232. 31. Yoshiyuki Kagei and Takayuki Kobayashi, Asymptotic behavior of solutions to the compressible Navier-Stokes equations on the half space, Arch. Ration. Mech. Anal. vol. 177 (2005), no. 2, pp. 231 -- 330. 32. 
A limit problem in natural convection, Yoshiyuki Kagei, Michael Ruzicka and Gudrun Thaeter, Nonlinear Differential Equations Appl., vol. 13 (2006), no. 4, pp. 447--467. 33. On large-time behavior of solutions to the compressible Navier-Stokes equations in the half space in $R^3$, Yoshiyuki Kagei, Takayuki Kobayashi, Arch. Ration. Mech. Anal. , vol. 165 (2002), no. 2, pp.89--159. 34. Yoshiyuki Kagei, Invariant manifolds and long-time asymptotics for the Vlasov-Poisson-Fokker-Planck equation, SIAM J. Math. Anal., vol. 33 (2001), no. 2, pp.489--507. 35. Yoshiyuki Kagei, Michael Ruzicka, Gudrun, Natural convection with dissipative heating, Commun. Math. Phys. , vol. 214 (2000), no. 2, pp.287--313. 36. Yoshiyuki Kagei and Wolf von Wahl, Asymptotic stability of steady flows in infinite layers of viscous incompressible fluids in critical cases of stability, Indiana Univ. Math. J., vol. 48 (1999), no. 3, pp.1083--1110. 37. Yoshiyuki Kagei and Wolf von Wahl, The Eckhaus criterion for convection roll solutions of the Oberbeck-Boussinesq equations, Internat. J. Non-Linear Mech. , vol. 32 (1997), no. 3, pp.563--620. 38. Yoshiyuki Kagei and Wolf von Wahl, Asymptotic stability of higher order norms in terms of asymptotic energy stability for viscous incompressible fluid flows heated from below, Japan J. Indust. Appl. Math. , vol. 13 (1996), no. 1, pp.33 --49. 39. Yoshiyuki Kagei, Attractors for two-dimensional equations of thermal convection in the presence of the dissipation function, Hiroshima Math. J., vol. 25 (1995), no. 2, pp.251--311. 40. Yoshiyuki Kagei and Wolf von Wahl, Stability of higher norms in terms of energy-stability for the Boussinesq equations: remarks on the asymptotic behaviour of convection-roll-type solutions, Differential Integral Equations, vol.7 (1994), pp.921--948. 41. Yoshiyuki Kagei, On weak solutions of nonstationary Boussinesq equations, Differential Integral Equations, vol.6 (1993), pp.587--611. 42. Yoshiyuki Kagei and Maria Skowron, Nonstationary flows of nonsymmetric fluids with thermal convection, Hiroshima Math. J, vol. 23 (1993), no. 2, pp.343--363. 43. Zhi Min Chen, Yoshiyuki Kagei and Tetsuro Miyakawa, Remarks on stability of purely conductive steady states to the exterior Boussinesq problem, Adv. Math. Sci. Appl., vol. 1 (1992), no. 2, pp. 411--430. ■Proceedings: 1. Yoshiyuki Kagei, On asymptotic behavior of solutions of the compressible Navier-Stokes equation around a parallel flow, Proceedings of the conference "Hyperbolic Problems: Theory, Numerics and Applications" (HYP2010), Series in Contemporary Applied Mathematics CAM 17 (Editors: Tatsien Li and Song Jiang), vol. 1, pp. 44--59, July, 2012, Higher Education Press (Beijing). 2. Yoshiyuki Kagei, On large time behavior of solutions to the compressible Navier-Stokes equation in an infinite layer, Yoshiyuki Kagei, to appear in the Proceedings of "Mathematical Analysis on the Navier-Stokes equations and Related Topics, Past and Future, --In memory of Professor Tetsuro Miyakawa", Gakuto International Series Mathematical Sciences and Applications., vol. 35 (2011), pp. 71--90. 3. Yoshiyuki Kagei, On two-dimensional equations of thermal convection in the presence of the dissipation function, Theory of the Navier-Stokes equations, Ser. Adv. Math. Appl. Sci., 47, pp.72--85, 1998.