{ "paper_id": "P02-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:42.838574Z" }, "title": "Entropy Rate Constancy in Text", "authors": [ { "first": "Dmitriy", "middle": [], "last": "Genzel", "suffix": "", "affiliation": { "laboratory": "Brown Laboratory for Linguistic Information Processing", "institution": "Brown University Providence", "location": { "postCode": "02912", "region": "RI", "country": "USA" } }, "email": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "", "affiliation": { "laboratory": "Brown Laboratory for Linguistic Information Processing", "institution": "Brown University Providence", "location": { "postCode": "02912", "region": "RI", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a constancy rate principle governing language generation. We show that this principle implies that local measures of entropy (ignoring context) should increase with the sentence number. We demonstrate that this is indeed the case by measuring entropy in three different ways. We also show that this effect has both lexical (which words are used) and non-lexical (how the words are used) causes.", "pdf_parse": { "paper_id": "P02-1026", "_pdf_hash": "", "abstract": [ { "text": "We present a constancy rate principle governing language generation. We show that this principle implies that local measures of entropy (ignoring context) should increase with the sentence number. We demonstrate that this is indeed the case by measuring entropy in three different ways. We also show that this effect has both lexical (which words are used) and non-lexical (how the words are used) causes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is well-known from Information Theory that the most efficient way to send information through noisy channels is at a constant rate. If humans try to communicate in the most efficient way, then they must obey this principle. The communication medium we examine in this paper is text, and we present some evidence that this principle holds here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Entropy is a measure of information first proposed by Shannon (1948) . Informally, entropy of a random variable is proportional to the difficulty of correctly guessing the value of this variable (when the distribution is known). Entropy is the highest when all values are equally probable, and is lowest (equal to 0) when one of the choices has probability of 1, i.e. deterministically known in advance.", "cite_spans": [ { "start": 54, "end": 68, "text": "Shannon (1948)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we are concerned with entropy of English as exhibited through written text, though these results can easily be extended to speech as well. The random variable we deal with is therefore a unit of text (a word, for our purposes 1 ) that a random person who has produced all the previous words in the text stream is likely to produce next. We have as many random variables as we have words in a text. The distributions of these variables are obviously different and depend on all previous words produced. 
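To make the informal description above concrete, the following standard definitions are added here for reference (they are not part of the original text; the notation anticipates Section 3):

```latex
% Entropy of a discrete random variable X with distribution p (in bits):
\[
  H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x),
\]
% which is maximal when p is uniform and zero when some value has probability 1.
% For the i-th word of a text, the relevant variable is the next word X_i given
% the particular words already produced, with conditional entropy
\[
  H(X_i \mid X_1 = w_1, \dots, X_{i-1} = w_{i-1})
  \;=\; -\sum_{w} P(w \mid w_1 \dots w_{i-1})\,\log_2 P(w \mid w_1 \dots w_{i-1}).
\]
```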
We claim, however, that the entropy of these random variables is on average the same 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been work in the speech community inspired by this constancy rate principle. In speech, distortion of the audio signal is an extra source of uncertainty, and this principle can be applied in the following way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A given word in one speech context might be common, while in another context it might be rare. To keep the entropy rate constant over time, it would be necessary to take more time (i.e., pronounce more carefully) in less common situations. Aylett (1999) shows that this is indeed the case.", "cite_spans": [ { "start": 240, "end": 253, "text": "Aylett (1999)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "It has also been suggested that the principle of constant entropy rate agrees with biological evidence of how human language processing has evolved (Plotkin and Nowak, 2000). Kontoyiannis (1996) also reports results on 5 consecutive blocks of characters from the works of Jane Austen that are in agreement with our principle and, in particular, with its corollary as derived in the following section.", "cite_spans": [ { "start": 148, "end": 173, "text": "(Plotkin and Nowak, 2000)", "ref_id": "BIBREF7" }, { "start": 176, "end": 195, "text": "Kontoyiannis (1996)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Let {X_i}, i = 1, ..., n, be a sequence of random variables, with X_i corresponding to word w_i in the corpus. Let us consider i to be fixed. The random variable we are interested in is Y_i, a random variable that has the same distribution as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "X_i | X_1 = w_1, ..., X_{i-1} = w_{i-1} for some fixed words w_1 ... w_{i-1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "For each word w_i there will be some word w_j (j \u2264 i) which is the starting word of the sentence that w_i belongs to. We will combine the random variables X_1 ... X_{i-1} into two sets. The first, which we call C_i (for context), contains X_1 through X_{j-1}, i.e. all the words from the preceding sentences. The remaining set, which we call L_i (for local), contains the words X_j through X_{i-1}. Both L_i and C_i could be empty sets. We can now write our variable", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Y_i as X_i | C_i, L_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Our claim is that the entropy of Y_i, H(Y_i), stays constant for all i. By the definition of conditional mutual information between X_i and C_i given L_i,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "H(Y_i) = H(X_i | C_i, L_i) = H(X_i | L_i) \u2212 I(X_i; C_i | L_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "where the last term is the mutual information between the word and the context, given the words of the current sentence seen so far. 
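For completeness, the decomposition above can be spelled out; this short derivation is a standard restatement added for reference, not part of the original paper:

```latex
% Definition of conditional mutual information between the word X_i and the
% context C_i, given the local (within-sentence) words L_i:
\[
  I(X_i; C_i \mid L_i) \;=\; H(X_i \mid L_i) - H(X_i \mid C_i, L_i).
\]
% Rearranging gives the decomposition used in the text:
\[
  H(Y_i) \;=\; H(X_i \mid C_i, L_i) \;=\; H(X_i \mid L_i) - I(X_i; C_i \mid L_i).
\]
% If H(Y_i) is on average constant in i while I(X_i; C_i | L_i) grows as more
% context accumulates, the local estimate H(X_i | L_i) must increase with the
% sentence number; this is the corollary tested in the experiments below.
```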
As i increases, so does the set C_i. L_i, on the other hand, grows until we reach the end of the sentence and then becomes small again. Intuitively, we expect the mutual information at, say, word k of each sentence (where L_i has the same size for all i) to increase as the sentence number increases. By our hypothesis we then expect H(X_i | L_i) to increase with the sentence number as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Current techniques are not very good at estimating H(Y_i), because we do not have a very good model of context, since such a model must be mostly semantic in nature. We have shown, however, that if we can instead estimate H(X_i | L_i) and show that it increases with the sentence number, we will provide evidence to support the constancy rate principle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "The latter expression is much easier to estimate, because it involves only words from the beginning of the sentence, whose relationships are largely local and can be successfully captured by something as simple as an n-gram model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "We are only interested in the mean value of H(X_j | L_j) for w_j \u2208 S_i, where S_i is the i-th sentence. This mean is equal to (1/|S_i|) H(S_i), which reduces the problem to that of estimating the entropy of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "We use three different ways to estimate the entropy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "\u2022 Estimate H(S_i) using an n-gram probabilistic model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "\u2022 Estimate H(S_i) using a probabilistic model induced by a statistical parser", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "\u2022 Estimate H(X_i) directly, using a non-parametric estimator. We estimate the entropy for the beginning of each sentence. This approach estimates", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "H(X_i), not H(X_i | L_i),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "i.e. it ignores not only the context, but also the local syntactic information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "N-gram models make the simplifying assumption that the current word depends only on a constant number of the preceding words (we use three). The probability model for sentence S thus looks as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram", "sec_num": "4.1" }, { "text": "P(S) = P(w_1) P(w_2 | w_1) P(w_3 | w_2 w_1) \u00d7 \u220f_{i=4}^{n} P(w_i | w_{i-1} w_{i-2} w_{i-3})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram", "sec_num": "4.1" }, { "text": "To estimate the entropy of the sentence S, we compute -log P(S). This is in fact an estimate of the cross entropy between our model and the true distribution; a computational sketch of this estimate is given below. 
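A minimal sketch (not the authors' code) of how this estimate can be computed: it assumes pre-tokenized articles, conditions on the three preceding words as in the formula above, uses simple add-k smoothing in place of whatever smoothing the authors used, and groups the per-sentence estimates by sentence number as in Figure 1. All function names are illustrative.

```python
import math
from collections import defaultdict

def train_ngram(train_sents, order=4, k=0.1):
    """Count n-grams over tokenized training sentences and return an add-k
    smoothed log2-probability function logprob(history, word)."""
    ngrams, hists, vocab = defaultdict(int), defaultdict(int), set()
    pad = ["<s>"] * (order - 1)
    for sent in train_sents:
        toks = pad + sent + ["</s>"]
        vocab.update(toks)
        for i in range(order - 1, len(toks)):
            hist = tuple(toks[i - order + 1:i])
            ngrams[hist + (toks[i],)] += 1
            hists[hist] += 1
    vocab_size = len(vocab)

    def logprob(hist, word):
        return math.log2((ngrams[hist + (word,)] + k) / (hists[hist] + k * vocab_size))

    return logprob

def per_word_entropy(sent, logprob, order=4):
    """Estimate -log2 P(S) / |S| for one sentence, ignoring all preceding sentences."""
    toks = ["<s>"] * (order - 1) + sent + ["</s>"]
    bits = -sum(logprob(tuple(toks[i - order + 1:i]), toks[i])
                for i in range(order - 1, len(toks)))
    return bits / len(sent)

def entropy_by_sentence_number(articles, logprob, order=4):
    """Average the per-sentence estimates for each sentence number (cf. Figure 1)."""
    buckets = defaultdict(list)
    for article in articles:               # article: list of tokenized sentences
        for n, sent in enumerate(article, start=1):
            buckets[n].append(per_word_entropy(sent, logprob, order))
    return {n: sum(vals) / len(vals) for n, vals in sorted(buckets.items())}
```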
Thus we are overestimating the entropy, but if we assume that the overestimation error is more or less uniform, we should still see our estimate increase as the sentence number increases. Sections 0-20 of the Penn Treebank corpus (Marcus et al., 1993) were used for training and sections 21-24 for testing. Each article was treated as a separate text, results for each sentence number were grouped together, and the mean values are reported in Figure 1 (dashed line). Since most articles are short, there are fewer sentences available for larger sentence numbers, so the results for large sentence numbers are less reliable.", "cite_spans": [ { "start": 360, "end": 380, "text": "(Marcus et al., 1993", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 581, "end": 589, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "N-gram", "sec_num": "4.1" }, { "text": "The trend is fairly obvious, especially for small sentence numbers: sentences (with no context used) get harder as the sentence number increases, i.e. the probability of the sentence under the model decreases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-gram", "sec_num": "4.1" }, { "text": "We also computed the log-likelihood of the sentence using the statistical parser described in Charniak (2001) 3 . The probability model for sentence S with parse tree T is (roughly):", "cite_spans": [ { "start": 92, "end": 107, "text": "Charniak (2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Parser Model", "sec_num": "4.2" }, { "text": "P(S) = \u220f_{x \u2208 T} P(x | parents(x))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser Model", "sec_num": "4.2" }, { "text": "where parents(x) are the words that are parents of node x in the tree T. This model takes into account syntactic information present in the sentence, which the previous model does not. The entropy estimate is again -log P(S). Overall, these estimates are lower (closer to the true entropy) because this model is closer to the true probability distribution. The same corpus and the same training and testing sets were used. The results are reported in Figure 1 (solid line). The estimates are lower (better), but follow the same trend as the n-gram estimates.", "cite_spans": [], "ref_spans": [ { "start": 453, "end": 461, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Parser Model", "sec_num": "4.2" }, { "text": "Finally, we compute the entropy using the estimator described in (Kontoyiannis et al., 1998) . The estimation is done as follows. Let T be our training corpus. Let S = {w_1 ... w_n} be the test sentence. We find the largest k \u2264 n such that the sequence of words w_1 ... w_k occurs in T. Then log_2 |T| / k is an estimate of the entropy at the word w_1. We compute such estimates for many first sentences, second sentences, etc., and take the average.", "cite_spans": [ { "start": 64, "end": 91, "text": "(Kontoyiannis et al., 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Non-parametric Estimator", "sec_num": "4.3" }, { "text": "For this experiment we used 3 million words of the Wall Street Journal (year 1988) as the training set and 23 million words (full year 1987) as the testing set 4 . The results are shown in Figure 2 . They demonstrate the expected behavior, except for a strong abnormality at the second sentence. This abnormality is probably corpus-specific. 
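Below is a minimal sketch of the match-length computation described above, assuming the estimate at w_1 takes the form log_2 |T| / k for the longest sentence prefix w_1 ... w_k found in the training corpus. It is a simplified reading of the estimator of Kontoyiannis et al. (1998), not the authors' implementation, and the names are illustrative.

```python
import math

def prefix_match_length(sentence, training_text):
    """Length k of the longest prefix w_1 ... w_k of the sentence that occurs
    as a contiguous word sequence somewhere in the training corpus."""
    k = 0
    while k < len(sentence):
        needle = " " + " ".join(sentence[:k + 1]) + " "
        if needle not in training_text:
            break
        k += 1
    return k

def entropy_at_sentence_start(sentence, training_tokens):
    """Match-length estimate of the entropy at the first word of the sentence:
    log2(|T|) / k, falling back to log2(|T|) when no prefix matches at all."""
    training_text = " " + " ".join(training_tokens) + " "
    k = prefix_match_length(sentence, training_text)
    bits = math.log2(len(training_tokens))
    return bits if k == 0 else bits / k

# Usage sketch: average this estimate over all first sentences, then over all
# second sentences, and so on, to obtain a curve like the one in Figure 2.
train = "the terms of the deal were not disclosed .".split()
print(entropy_at_sentence_start("the terms were not immediately known .".split(), train))
```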
As an example of such a corpus-specific effect, 1.5% of the second sentences in this corpus start with the words "the terms were not disclosed", which makes such sentences easy to predict and decreases the entropy.", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 197, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Non-parametric Estimator", "sec_num": "4.3" }, { "text": "We have shown that the entropy of a sentence (taken without context) tends to increase with the sentence number. We now examine the causes of this effect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causes of Entropy Increase", "sec_num": "4.4" }, { "text": "These causes may be split into two categories: lexical (which words are used) and non-lexical (how the words are used). If the effects were entirely lexical, we would expect the per-word entropy of the closed-class words not to increase with the sentence number, since presumably the same set of such words gets used in each sentence. For this experiment we use our n-gram estimator as described in Section 4.1. We evaluate the per-word entropy for nouns, verbs, determiners, and prepositions. The results are given in Figure 3 (solid lines). They indicate that the entropy of the closed-class words does increase with the sentence number, which presumably means that non-lexical effects (e.g. usage) are present. We also want to check for the presence of lexical effects. It has been shown by Kuhn and De Mori (1990) that lexical effects can be easily captured by caching. In its simplest form, caching involves keeping track of the words occurring in the previous sentences and assigning to each word w a caching probability P_c(w) = C(w) / \u2211_{w'} C(w'), where C(w) is the number of times w occurs in the previous sentences. This probability is then mixed with the regular probability (in our case, a smoothed trigram) as shown below, where \u03bb was picked to be 0.1. This new probability model is known to have lower entropy. More complex caching techniques are possible (Goodman, 2001), but they are not necessary for this experiment.", "cite_spans": [ { "start": 776, "end": 797, "text": "Kuhn and De Mori (1990)", "ref_id": "BIBREF5" }, { "start": 1336, "end": 1350, "text": "(Goodman, 2001", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 509, "end": 531, "text": "Figure 3 (solid lines)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Causes of Entropy Increase", "sec_num": "4.4" }, { "text": "P_mixed(w) = (1 \u2212 \u03bb) P_ngram(w) + \u03bb P_c(w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causes of Entropy Increase", "sec_num": "4.4" }, { "text": "Thus, if lexical effects are present, we expect the model that uses caching to provide lower entropy estimates. The results are given in Figure 3 (dashed lines). We can see that caching gives a significant improvement for nouns and a small one for verbs, but gives no improvement for the closed-class parts of speech. This shows that lexical effects are present for the open-class parts of speech and (as we assumed in the previous experiment) are absent for the closed-class parts of speech. 
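The cache-mixture model described above can be sketched as follows. This is an illustrative reading, not the authors' implementation: base_prob stands in for the smoothed trigram (and is assumed to return a probability), the cache is a simple unigram count over the preceding sentences, and the class and method names are invented for the example.

```python
from collections import defaultdict

class CacheMixtureModel:
    """Mix a base n-gram probability with a unigram cache built from the
    preceding sentences: P_mixed(w) = (1 - lam) * P_ngram(w) + lam * P_c(w)."""

    def __init__(self, base_prob, lam=0.1):
        self.base_prob = base_prob      # base_prob(history, word) -> probability
        self.lam = lam
        self.counts = defaultdict(int)  # C(w), counted over previous sentences
        self.total = 0

    def prob(self, history, word):
        cache_prob = self.counts[word] / self.total if self.total else 0.0
        return (1 - self.lam) * self.base_prob(history, word) + self.lam * cache_prob

    def update(self, sentence):
        """Call after scoring each sentence, so that later sentences in the
        same article see the words already used."""
        for w in sentence:
            self.counts[w] += 1
            self.total += 1
```

Scoring an article sentence by sentence and calling update() after each one gives the intended behavior: open-class words repeated from earlier sentences receive extra probability mass from the cache, while closed-class words gain little because the base model already predicts them well.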
Since the previous experiment has already shown the presence of non-lexical effects, we can conclude that both lexical and non-lexical effects are present.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 160, "text": "Figure 3 (dashed lines)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Causes of Entropy Increase", "sec_num": "4.4" }, { "text": "We have proposed a fundamental principle of language generation, namely the entropy rate constancy principle. We have shown that the entropy of sentences taken without context increases with the sentence number, which is in agreement with the above principle. We have also examined the causes of this increase and shown that they are both lexical (primarily for open-class parts of speech) and non-lexical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "These results are interesting in their own right, and may have practical implications as well. In particular, they suggest that language modeling may be a fruitful way to approach issues of contextual influence in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "Of course, to some degree language-modeling caching work has always recognized this, but caching is a rather crude use of context and does not address the issues one normally thinks of when talking about context. We have seen, however, that entropy measurements can pick up much more subtle influences, as evidenced by the results for determiners and prepositions, where we see no caching influence at all but nevertheless observe increasing entropy as a function of sentence number. This suggests that such measurements may be able to pick up more obviously semantic contextual influences than simply the repeated words captured by caching models. For example, sentences will differ in how much useful contextual information they carry. Are there useful generalizations to be made? E.g., might the previous sentence always be the most useful, or, perhaps, for newspaper articles, the first sentence? Can these measurements detect such already established contextual relations as the given-new distinction? What about other pragmatic relations? All of these deserve further study. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "It may seem like an arbitrary choice, but a word is a natural unit of length; after all, when one is asked to give the length of an essay, one typically gives the number of words as a measure. 2 Strictly speaking, we want the cross-entropy between all the words in sentence number n and the true model of English to be the same for all n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This parser does not proceed in a strictly left-to-right fashion, but this is not very important since we estimate the entropy for the whole sentence rather than for individual words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is not the same training set as the one used in the two previous experiments. 
For this experiment we needed a larger but similar data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to acknowledge the members of the Brown Laboratory for Linguistic Information Processing and particularly Mark Johnson for many useful discussions. Also thanks to Daniel Jurafsky, who early on suggested the interpretation of our data that we present here. This research has been supported in part by NSF grants IIS 0085940, IIS 0112435, and DGE 9870676.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic suprasegmentals: Relationships between redundancy, prosodic structure and syllabic duration", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Aylett", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ICPhS-99", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. P. Aylett. 1999. Stochastic suprasegmentals: Relationships between redundancy, prosodic structure and syllabic duration. In Proceedings of ICPhS-99, San Francisco.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL-2001", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 2001. A maximum-entropy-inspired parser. In Proceedings of ACL-2001, Toulouse.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A bit of progress in language modeling", "authors": [ { "first": "J", "middle": [ "T" ], "last": "Goodman", "suffix": "" } ], "year": 2001, "venue": "Computer Speech and Language", "volume": "15", "issue": "", "pages": "403--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. T. Goodman. 2001. A bit of progress in language modeling. Computer Speech and Language, 15:403-434.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Nonparametric entropy estimation for stationary processes and random fields, with applications to English text", "authors": [ { "first": "I", "middle": [], "last": "Kontoyiannis", "suffix": "" }, { "first": "P", "middle": [ "H" ], "last": "Algoet", "suffix": "" }, { "first": "Yu", "middle": [ "M" ], "last": "Suhov", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Wyner", "suffix": "" } ], "year": 1998, "venue": "IEEE Trans. Inform. Theory", "volume": "44", "issue": "", "pages": "1319--1327", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Kontoyiannis, P. H. Algoet, Yu. M. Suhov, and A. J. Wyner. 1998. Nonparametric entropy estimation for stationary processes and random fields, with applications to English text. IEEE Trans. Inform. Theory, 44:1319-1327, May.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The complexity and entropy of literary styles", "authors": [ { "first": "I", "middle": [], "last": "Kontoyiannis", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Kontoyiannis. 1996. The complexity and entropy of literary styles. NSF Technical Report No. 97, Department of Statistics, Stanford University, June. 
[unpublished, can be found at the author's web page].", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A cache-based natural language model for speech recognition", "authors": [ { "first": "R", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "R. De", "middle": [], "last": "Mori", "suffix": "" } ], "year": 1990, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "12", "issue": "6", "pages": "570--583", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kuhn and R. De Mori. 1990. A cache-based natural language model for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6):570-583.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Building a large annotated corpus of English: the Penn treebank", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19:313-330.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language evolution and information theory", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Plotkin", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Nowak", "suffix": "" } ], "year": 2000, "venue": "Journal of Theoretical Biology", "volume": "", "issue": "", "pages": "147--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. B. Plotkin and M. A. Nowak. 2000. Language evolution and information theory. Journal of Theoretical Biology, pages 147-159.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A mathematical theory of communication", "authors": [ { "first": "C", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "The Bell System Technical Journal", "volume": "27", "issue": "", "pages": "623--656", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27:379-423, 623-656, July, October.", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "N-gram and parser estimates of entropy (in bits per word)" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Non-parametric estimate of entropy" }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Comparing Parts of Speech" } } } }