{ "paper_id": "C65-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:12:30.110974Z" }, "title": "965 International Conference on Computational Linguistics MODELS OF LEXICAL DECAY", "authors": [ { "first": "D", "middle": [], "last": "Kleinecke", "suffix": "", "affiliation": { "laboratory": "", "institution": "SANTA BARBARA", "location": { "postCode": "CALIFORNIA 9310Z" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Lexical decay is the phenomenon underlying the dating techniques known as \"glottochronology\" and\"lexicostatistics.\" Much of the contraversial nature of work in this field is the result of extremely imprecise foundations and lack of attention to the underlying statistical and semantic models. A satisfactory semantic model can be found in the concept of semantic atom. Notwithstanding a number of philosophical objections, the semantic atom is an operationally feasible support for a lexicon which is a semantic subset of all possible meanings and at the same time, exhausts the vocabulary of a language. Lexical decay is the process by which the lexical item covering an atom is replaced by another lexical item. Exponential lexical preservation is, in this model, directly analogous to decay phenomena in nuclear physics. Consistency requires that the decay process involved in exponentially preserved vocabularies be a Poisson process. This shows how to form test vocabularies for dating and proves that presently used vocabularies are not correctly formed. Dialectation studies show that historically diverging populations must be modelled by correlated Poisson processes. Definitive statistical treatment of these questions is not possible at this time, but much desirable research can be indicated.", "pdf_parse": { "paper_id": "C65-1015", "_pdf_hash": "", "abstract": [ { "text": "Lexical decay is the phenomenon underlying the dating techniques known as \"glottochronology\" and\"lexicostatistics.\" Much of the contraversial nature of work in this field is the result of extremely imprecise foundations and lack of attention to the underlying statistical and semantic models. A satisfactory semantic model can be found in the concept of semantic atom. Notwithstanding a number of philosophical objections, the semantic atom is an operationally feasible support for a lexicon which is a semantic subset of all possible meanings and at the same time, exhausts the vocabulary of a language. Lexical decay is the process by which the lexical item covering an atom is replaced by another lexical item. Exponential lexical preservation is, in this model, directly analogous to decay phenomena in nuclear physics. Consistency requires that the decay process involved in exponentially preserved vocabularies be a Poisson process. This shows how to form test vocabularies for dating and proves that presently used vocabularies are not correctly formed. Dialectation studies show that historically diverging populations must be modelled by correlated Poisson processes. 
Definitive statistical treatment of these questions is not possible at this time, but much desirable research can be indicated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper is an attempt to establish the method of dating by lexical decay upon an adequate theoretical foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The method discussed is that invented by Swadesh (1) over a decade ago and usually known as glottochronology or lexicostatistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In the intervening years it has been widely applied, but often to the accompaniment of much confusion and controversy. It seems that much of the confusion can be removed by a rigorous treatment of the phenomenological model and careful application of statistics. The controversy can be removed only by the completion of a sufficient number of supporting studies. Rigorous formulation permits us to pinpoint what studies are needed and what conclusions are being sought.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Granting (as not everyone seems willing to do) that the basic fact of "uniform" lexical decay occurs, the problem to be attacked is that of correctly formulating models for lexical decay and of correctly deriving statistical consequences from these models. In what follows, we will construct a set of models which seem to fit the needs of the method of dating by lexical decay. Our approach is strictly pragmatic; that is, we construct the model we need without concerning ourselves about its a priori reasonableness. Later we try to assemble some arguments which justify the model. In no sense is this an approach from first principles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The analogy between lexical decay and the decay phenomena of nuclear physics has been often noted and dismissed. In the present paper, we insist that this analogy is much more than an analogy; it is, on the first level, an identity. The only alternative to this hypothesis seems to be a kind of mystic faith that the decay occurs but without palpable, manipulable principles. The burden of proof that the identity is false lies with the doubter, and we will make no further demonstration of its validity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Decay phenomena in nuclear physics are governed by relatively simple, well-understood principles. To apply these results to lexical decay we first establish the concepts of a semantic atom and a set of independent semantic atoms. The observed fact of exponential decay of vocabulary is then accounted for by assuming that the lexical item covering an atom decays according to a Poisson process. Generally speaking, the converse of this is also true, and only a Poisson process would produce exponential decay. From these considerations, we can draw many conclusions about how to and how not to construct test vocabularies for dating purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 2", "sec_num": null }, { "text": "With this model in hand, we can draw conclusions of a statistical nature. For example, we can develop formulas for the proper method of dating the split between three or more languages and for good estimators in more complex situations. 
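As an immediate illustration of the first-order model, a direct simulation of independent per-atom decay reproduces the expected survivor count N exp(-λt). (The following Python sketch is an editorial illustration, not part of the original paper; the function name and the parameter values are ours, with λ = 1/5000 per year taken from the estimate discussed below.)

```python
import math
import random

def simulate_survivors(n_atoms=200, lam=1.0 / 5000.0, t=2000.0, trials=1000):
    """First-order model: each atom's original cover survives t years with
    probability exp(-lam * t), independently of every other atom."""
    p = math.exp(-lam * t)
    mean = sum(
        sum(1 for _ in range(n_atoms) if random.random() < p)
        for _ in range(trials)
    ) / trials
    return mean, n_atoms * p  # simulated vs. expected survivors

print(simulate_survivors())  # both ≈ 200 * exp(-0.4) ≈ 134
```

A test vocabulary violating the independence assumption would scatter more widely than this binomial baseline, which is one practical way the assumption can be checked. 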
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 2", "sec_num": null }, { "text": "We can construct an imprecise heuristic model for the dynamic semantics underlying the Poisson process. So long as the first order theory is adequate, this is much in the nature of a curiosity. It seems, however, that first order theory is not adequate. Actually, such a conclusion is really premature because the kind of verification studies needed have not been made. Assuming the pessimistic conclusion, we have to construct second (or higher) order theories to account for the inadequacies of first order theory. At the moment, we have no useful results in this direction--the problem merges into the problem of dialectation. Probably the most important service we can render is to indicate exactly what kind of detailed studies are needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 2", "sec_num": null }, { "text": "It is very easy to raise objections of a philosophical nature to the concept of a semantic atom. In this paper we will simply ignore these objections and define the semantic atom in an operational way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Atoms", "sec_num": null }, { "text": "There are also operational difficulties, but these seem to be surmountable. We conclude that, with adequate precautions, semantic atoms can be operationally feasible even if true rigor is impossible. In the case of little-known languages, there is much more chance for error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Atoms", "sec_num": null }, { "text": "We should encourage collectors of vocabularies to improve the precision of their definitions so that the atom in question can be identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Atoms", "sec_num": null }, { "text": "We assume that lexical decay, for a set of independent semantic atoms, is a Poisson process. That is, it satisfies three conditions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decay Process", "sec_num": null }, { "text": "1. Each atom decays independently of all the other atoms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decay Process", "sec_num": null }, { "text": "Each atom decays independently of its own history of earlier decay.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "There is a constant λ such that for each atom the probability of one decay in a short time interval Δt is λΔt, and the probability of more than one decay is negligible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "It is rather easy to deduce that for longer time intervals t, the probability of not decaying is exp(-λt), and if there are N atoms, the expected number of undecayed atoms after time t is N exp(-λt).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "This formula is the usual formula for lexical decay. It should be pointed out that it was tested, statistically, in the first publication by Swadesh, and it failed to pass. The difficulty is probably due to the word list used, which is not an independent set of atoms. 
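How such a statistical test can be set up is worth making explicit. (The sketch below is an editorial illustration with hypothetical counts; the paper itself reports no computations of this kind.) The observed retention count is compared with the binomial distribution implied by a per-atom survival probability of exp(-λt):

```python
import math

def retention_z_score(n_atoms, n_retained, lam, t):
    """z-score of an observed retention count against the binomial
    prediction with per-atom survival probability p = exp(-lam * t)."""
    p = math.exp(-lam * t)
    mean = n_atoms * p
    sd = math.sqrt(n_atoms * p * (1.0 - p))
    return (n_retained - mean) / sd

# Hypothetical tally: 100 atoms, 70 still covered after 2000 years.
print(retention_z_score(100, 70, 1.0 / 5000.0, 2000.0))  # ≈ 0.63, acceptable
```

A word list whose items are not independent will tend to produce z-scores outside ±2 more often than the model allows. 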
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "If we examine the assumptions made so far, we see that any list of semantic atoms can be used if they are: (1) independent; and (2) assured of existence throughout the time in question. There is no satisfactory a priori basis for assuming that some kinds of semantic atoms decay at different rates than other kinds, and it is doubtful if enough historical evidence can be collected to make such a conclusion statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The question whether λ is a universal constant, a constant within any one language but possibly differing between languages, or a variable, is easier to discuss. So far, indications are that λ is about equal to 1/5000 per year. Now this means that over the span of most historic evidence, exp(-λt) will be greater than about 0.60.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "There is a great deal of scatter to be expected in the results because N exp(-λt) is an expectation, not an exact prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "There have been a number of studies of the exponent of exponential decay. All of them are too superficial to be conclusive (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "An adequate study in any one language would have to meet several criteria which make it into a major research effort. A set of independent semantic atoms must be selected--selected prior to detailed study--and no atoms, however difficult, dropped without complete explanations (3). Then the history of each atom must be traced through the historical record to locate the lexical item covering the atom at each point in time. In reporting the study, all of this should be fully documented in detail. Each instance of decay can then be recognized and tallied. Statistical tests should be applied to see whether or not the model is satisfied and to estimate λ. For example, if there are 100 semantic atoms, there should be about one decay every 50 years, uniformly spread through time. These things can be checked statistically. We hope that scholars will undertake definitive studies of this type for as many cases as possible (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Until the results of the kind of research just mentioned are available, the status of λ is uncertain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "We anticipate it will be recognized as a universal constant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The hypothetical mechanism advanced to explain lexical decay can be checked against history by case studies of semantic atoms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Each atom should show time periods when the principal word was nearly displaced. During these periods it is difficult to decide whether the old word or a new word is the principal cover. Usually the new word will pass away again, but sometimes it will displace the old word. 
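For the tallying step just described, the estimation itself is elementary. (The sketch below is an editorial illustration; the counts are hypothetical.) For a Poisson process, the maximum-likelihood estimate of λ is simply the number of observed decays divided by the number of atom-years of observation:

```python
import math

def estimate_lambda(total_decays, n_atoms, years):
    """Poisson maximum-likelihood estimate of the decay rate from decays
    tallied over n_atoms atoms traced for `years` years, together with
    its standard error sqrt(decays) / exposure."""
    exposure = n_atoms * years  # atom-years of observation
    return total_decays / exposure, math.sqrt(total_decays) / exposure

# Hypothetical tally: 100 atoms traced over 1000 years, 22 decays observed.
print(estimate_lambda(22, 100, 1000))  # ≈ (0.00022, 0.00005): near 1/5000
```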
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "A very tentative guess based on a casual examination of one hundred current English words suggests there are about four very heavily threatened words per hundred. Since we can expect about one word to be decaying at this moment, we conclude that about three out of four times the old word survives. All of this needs to be verified or disproven in detailed studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The statistical consequences of the model--the first order model described above--need to be explored. We cannot handle all possible situations, but the following examples should provide an adequate demonstration of technique so that any other problems which occur can be solved in the same manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decay Statistics", "sec_num": null }, { "text": "First, let us consider N languages deviating independently from a common parent which is not known to us.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 8", "sec_num": null }, { "text": "The following discussion is a bit more cumbersome than some alternative approaches, but it generalizes more easily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 8", "sec_num": null }, { "text": "Let α be any subset of the N languages and let P(α) be the probability that the given semantic atom is covered by the original lexical item in exactly the languages of the set α. New covering words are assumed to be different in each of the innovating languages. P(α) is a function of time and satisfies the following differential equation: dP(α)/dt = -λ Σ(i∈α) P(α) + λ Σ(j∉α) P(α∪j),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 8", "sec_num": null }, { "text": "where i and j are languages, ∈ and ∉ mean "belongs to" and "does not belong to" respectively, and α∪j is the union of α and the set containing only the language j. Thus, P(α) depends only on the value of |α| = n. We can recognize P(n) for n = 2, 3, ..., N, but P(0) and P(1) cannot be distinguished, so we combine these into P', which is obtained by P' = 1 - (N(N-1)/2) P(2) - ... - P(N), the subtracted terms being C(N,n) P(n) for n = 2, 3, ..., N. 
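To show how these class probabilities yield a date, the split time t can be estimated by maximizing the multinomial likelihood of the observed classes, using C(N,n) P(n) for n ≥ 2 and P' for the combined class. (The sketch below is an editorial illustration; the tallies and the grid search are hypothetical, and the analytic estimators of the text are not reproduced in it.)

```python
import math

def log_likelihood(t, lam, n_langs, counts, c_rest):
    """Multinomial log-likelihood under the first-order model: counts[n]
    atoms keep the original item in exactly n of n_langs languages
    (n >= 2); c_rest collects the indistinguishable n = 0, 1 cases (P')."""
    q = math.exp(-lam * t)
    ll, p_rest = 0.0, 1.0
    for n, c in counts.items():
        p_class = math.comb(n_langs, n) * q**n * (1.0 - q)**(n_langs - n)
        ll += c * math.log(p_class)
        p_rest -= p_class
    return ll + c_rest * math.log(p_rest)  # p_rest is P'

# Hypothetical tallies for N = 3 languages and a 100-atom test list.
counts, c_rest = {2: 30, 3: 40}, 30
best_t = max(range(100, 10000, 10),
             key=lambda t: log_likelihood(t, 1.0 / 5000.0, 3, counts, c_rest))
print(best_t)  # maximum-likelihood estimate of the split time, in years
```

The estimators in the text accomplish the same combination analytically; the grid search merely makes visible that the data from all the language subsets are being used at once. 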
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kleinecke - 8", "sec_num": null }, { "text": "As we explained in discussing semantic atoms, we feel there is no adequate observational data to which to apply these formulas for a conclusive test of their value. We have made a few experimental applications using the unsatisfactory data available in the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "Numerically, the time estimates we obtained, which we will not quote here, do not differ a great deal from those obtained by considering pairs alone. This is to be expected if the phenomena are at all consistent. The value of the formulas derived above lies in the fact that they correctly combine the data from several pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "The first-order method does have one very important difficulty which appears almost immediately if we try to treat more than three languages. This difficulty lies in the family tree of the languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "In the entire first-order development, we have implicitly used the concept of a tree. Languages go together under a "common ancestor" until some point in time when they divide and become two separate languages. The tree is the first-order model of dialectation--it is known to be inadequate, at least in many situations. In spite of a century or so of studies, we simply do not understand how dialectation occurs. More study is greatly needed, especially in the construction of higher-order models, but the problem lies outside the scope of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "The difficulty with the tree arises in decay studies because only splitting is compatible with our statistical model. We have no alternative to constructing a family tree if we wish to apply the method outlined above. However, it seems to be easy to find examples which do not allow a tree to be constructed. Consider four languages: A, B, C, and D. Suppose one semantic atom has the same cover in A and B, and another, different cover in C and D. And at the same time, some other atom has one cover in A and C, and a different cover in B and D. We cannot fit these data into any family tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "A little more specifically, in the Romance languages we find that the same innovation with respect to Latin is shared by several or all of the later languages. Some of this can be explained by the colloquial versus learned speech theory, but no family tree can be constructed to explain all the combinations of innovations. If we had an adequate explanation of the phenomena involved in these shared innovations, it is quite possible that we could assume Romance was the direct descendant of Imperial Latin without going back to Plautus or thereabouts, as seems to be required by the first order theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "A tentative beginning in this direction can be made by a second-order theory based on the dynamic model of lexical influence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Criticism of First Order Theory", "sec_num": null }, { "text": "The imprecise model of semantic pressures we formed to explain lexical decay suggests the following second-order model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "For each semantic atom, we consider not only a covering lexical item as before, but also a potential covering item. The potential cover is the source of pressure against the cover. When the cover decays, it is replaced by the potential cover. Naturally we also assume that the potential cover decays and is replaced by a new potential cover. 
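A small editorial sketch (anticipating the simplifying assumption, made just below, that both decays have the same constant λ) confirms by simulation that this two-level mechanism leaves the survival probability of the original cover at exp(-λt), as derived in what follows:

```python
import math
import random

def second_order_atom(lam, t, rng):
    """One atom of the second-order model: the cover decays at rate lam,
    and the waiting potential cover is independently replaced at rate lam.
    Returns (original cover survived, original potential still waiting)."""
    return rng.expovariate(lam) >= t, rng.expovariate(lam) >= t

rng = random.Random(1)
lam, t, n = 1.0 / 5000.0, 2000.0, 100000
p1 = p2 = 0
for _ in range(n):
    cover_ok, potential_ok = second_order_atom(lam, t, rng)
    if cover_ok and potential_ok:
        p1 += 1          # situation I
    elif cover_ok:
        p2 += 1          # situation II
print((p1 + p2) / n, math.exp(-lam * t))  # both ≈ 0.670
```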
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "In the interest of simplicity, and because we have no numerical data, we will assume both decays have the same constant λ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "First, let us consider a single language. The situation at an atom can be of four types: (I) both the original cover and potential cover remain; (II) the original cover remains, but the potential cover has decayed; (III) the original cover has decayed and the potential cover has replaced it; (IV) the cover is now neither the original nor the potential cover.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "Let P_I and P_II be the probabilities of the first two situations. The original cover remains in these two cases only, so that the probability of it remaining is P_I + P_II = exp(-λt), which is exactly the same as in first-order theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "When the second-order theory is applied to N languages, the results are quite complicated. We divide the languages into four sets α, β, γ, δ depending on which situation holds in the language; in set α, situation I holds, and so on. Then we have the basic differential equations. Before we can actually apply the maximum likelihood technique to languages without known ancestors, we have to make some further combinations, because sets with |β| = 1 cannot be distinguished from those with |β| = 0, nor those with |γ| = 1 from those with |γ| = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "Moreover, we cannot distinguish original covers from potential covers, so that the two sets β and γ must be combined with the same sets in the reverse order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "The general case is very complicated, so we restrict ourselves to two languages. We then observe that the covers are either the same or different. If they are the same, we have either |α∪β| = 2 and |γ| = |δ| = 0, or |γ| = 2 and |α∪β| = |δ| = 0. Thus, the probability is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "[exp(-λt)]^2 + [exp(-λt)]^2 [1 - exp(-λt)]^2 = exp(-2λt) [1 + (1 - exp(-λt))^2],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "which differs from the first order theory by the term in the square bracket. 
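The size of that bracketed correction is easy to tabulate. (The sketch below is an editorial illustration, again with λ = 1/5000 per year.)

```python
import math

def same_cover_probability(lam, t, second_order=False):
    """Probability that two languages separated for t years show the same
    cover on one atom: exp(-2*lam*t), inflated in the second-order model
    by the bracketed term [1 + (1 - exp(-lam*t))**2]."""
    p = math.exp(-2.0 * lam * t)
    if second_order:
        p *= 1.0 + (1.0 - math.exp(-lam * t)) ** 2
    return p

lam = 1.0 / 5000.0
for years in (500, 2000, 5000, 10000):
    p1 = same_cover_probability(lam, years)
    p2 = same_cover_probability(lam, years, second_order=True)
    print(years, round(p1, 4), round(p2, 4))  # the correction grows with t
```

As the discussion below observes, the correction is negligible except at very long separations. 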
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "The simplest case where the second-order theory is really required is that of four languages. We will illustrate the results by one expression. If k22 words are covered by two items, each shared by two languages, k4 words by one item in all four languages, k3 by one item in three languages, k2 by one item in two languages, and k' by no common items, then the expression to be solved for maximum likelihood is built from the terms 4k22, 4k4, 3k3, and 2k2, where p = exp(-λt).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "This second-order theory is unsatisfactory, not only because it leads to very complex formulas, but also because it seems to be qualitatively inadequate. The formula for splitting between two languages is not greatly modified except for very long times, and the change does not seem to be enough to account for data showing short times of division. It is hard to tell whether the formula for several languages including the quantity k22 is any help--so far we have no striking results to quote from its use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "A second-order theory where the potential cover decays at a different rate than the original cover might correct some of these defects, but we have no evidence upon which to estimate the decay rate in this case. It is more likely that a more elaborate mechanism must be postulated--it need not lead to more elaborate results. The model must be based on a kind of dialectation study which seems to be absent as yet from the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-Order Lexical Decay", "sec_num": null }, { "text": "We have derived a number of formulas relating to the estimation of time depths by observations of lexical decay. The methods used can be applied to obtain many more similar formulas as required in studies of actual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "All of these formulas are based on models of lexical decay using the concept of semantic atoms and their lexical covers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Lexical decay is identified with a change in lexical cover. If the semantic atoms are sufficiently independent, the decay is a Poisson process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null }, { "text": "Probably the most important practical conclusion is the result that any set of semantic atoms can be used to evaluate lexical decay provided the set is made up of atoms: (1) far enough removed in meaning from one another to assure independence; and (2) which represent concepts assured to have been in existence throughout the time period being studied. 
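The warning in end note (2) below can be made vivid by a simulation. (The sketch is an editorial illustration; it assumes, purely for the demonstration, that decay rates vary from atom to atom, an assumption the first-order model does not make.) Screening atoms by their good behavior over an earlier interval selects the slowly decaying ones and so biases the estimate of λ downward:

```python
import random

def mean_rate_after_screening(rates, t_screen, rng):
    """Mean decay rate among atoms whose covers happened to survive a
    screening interval of t_screen years; the survivors are biased
    toward the slowly decaying atoms."""
    kept = [lam for lam in rates if rng.expovariate(lam) >= t_screen]
    return sum(kept) / len(kept)

rng = random.Random(1)
# Illustrative heterogeneity: per-atom rates averaging 1/5000 per year.
rates = [rng.gammavariate(4.0, 1.0 / 20000.0) for _ in range(20000)]
print(sum(rates) / len(rates))                      # ≈ 0.00020 (1/5000)
print(mean_rate_after_screening(rates, 5000, rng))  # ≈ 0.00016, biased low
```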
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null }, { "text": "End Notes (1) See Robert B. Lees, "The Basis of Glottochronology," Language 29.113-127 (1953).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null }, { "text": "(2) There is no outstanding study of this problem. Attempts to "improve" the test vocabulary by limiting it to meanings which have behaved well in earlier studies are methodologically disastrous because they bias the value of λ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null }, { "text": "(3) This requirement is also intended to remove bias from the estimate of λ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null }, { "text": "(4) This is a matter of classical philological research independent of statistical syntheses made from the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical decay", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "FIGREF0": { "num": null, "text": "There remains the problem of making a Poisson process a reasonable assumption. In other words, we need to describe some sort of mechanism which makes words slip off semantic atoms independently of how long they have been covering the atom, and at a constant rate per unit time, at least over short time intervals. Incidentally, since λ is on the order of 1/5000 per year, 50 years is a short time interval. Since the speakers of normal languages are not historians, the independence from history seems easy to accept. The constant rate is harder to accept. First of all, we have to account for an identical figure in populations literate and illiterate, and between a handful of speakers and half a billion speakers. The decay effect must be independent of the number of speakers; hence it must be operative at the level of the single isolated speaker. This is satisfactory since, by and large, the amount of speech reaching an individual does not seem to have changed much throughout history and does not vary much between cultures at the present day. But why does a speaker decide to change an occasional lexical item--about 1% of them in his lifetime--and maintain the rest? The only hypothesis we have been able to construct is that all words are always under pressure--perhaps from several semantic "directions" at the same time. Most atoms resist change most of the time, but some set of accidents (all very real events at the sociological and psychological levels, but random accidents in our context) weakens a few, and the lexicon decays. In other words, there is a constant dynamic movement among secondary and incidental covers of the semantic atom which threaten the principal cover. Usually the threatening lexical items recede, but occasionally, in a random way, about once every five thousand years, the principal cover is displaced and a lexical decay occurs. 
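This mechanism can be stated quantitatively. (The sketch below is an editorial illustration; the episode rate of one threat per 1250 years and the one-in-four displacement probability are our readings of the figures given earlier: about four heavily threatened words per hundred at any moment, threat episodes lasting on the order of 50 years, and the old word surviving three times out of four.) The product of episode rate and displacement probability then reproduces λ = 1/5000 per year:

```python
import random

def cover_lifetime(episode_rate=1.0 / 1250.0, p_displace=0.25, rng=random):
    """Years until a principal cover is displaced, when threat episodes
    arrive at episode_rate per year and each succeeds with p_displace;
    the net decay rate is episode_rate * p_displace = 1/5000 per year."""
    years = 0.0
    while True:
        years += rng.expovariate(episode_rate)  # wait for the next threat
        if rng.random() < p_displace:           # usually the threat recedes
            return years

rng = random.Random(1)
lifetimes = [cover_lifetime(rng=rng) for _ in range(20000)]
print(sum(lifetimes) / len(lifetimes))  # ≈ 5000 years, i.e. λ ≈ 1/5000
```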
", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "P(α) = [exp(-λt)]^|α| [1 - exp(-λt)]^(N-|α|). This can be proven by induction on |α| from |α| = N downward, since |α∪j| = |α| + 1. Then dP(α)/dt = -λ|α| P(α) + λ(N-|α|) [exp(-λt)]^(|α|+1) [1 - exp(-λt)]^(N-|α|-1), so that the differential equation is satisfied.", "type_str": "figure", "uris": null } } } }