{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:07.498774Z" }, "title": "Emergent Language Generalization and Acquisition Speed are not tied to Compositionality", "authors": [ { "first": "Eugene", "middle": [], "last": "Kharitonov", "suffix": "", "affiliation": {}, "email": "kharitinov@fb.com" }, { "first": "Facebook", "middle": [], "last": "Ai", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "", "affiliation": {}, "email": "mbaroni@fb.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Studies of discrete languages emerging when neural agents communicate to solve a joint task often look for evidence of compositional structure. This stems for the expectation that such a structure would allow languages to be acquired faster by the agents and enable them to generalize better. We argue that these beneficial properties are only loosely connected to compositionality. In two experiments, we demonstrate that, depending on the task, noncompositional languages might show equal, or better, generalization performance and acquisition speed than compositional ones. Further research in the area should be clearer about what benefits are expected from compositionality, and how the latter would lead to them.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Studies of discrete languages emerging when neural agents communicate to solve a joint task often look for evidence of compositional structure. This stems for the expectation that such a structure would allow languages to be acquired faster by the agents and enable them to generalize better. We argue that these beneficial properties are only loosely connected to compositionality. In two experiments, we demonstrate that, depending on the task, noncompositional languages might show equal, or better, generalization performance and acquisition speed than compositional ones. Further research in the area should be clearer about what benefits are expected from compositionality, and how the latter would lead to them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There is a recent spike of interest in studying the languages that emerge when artificial neural agents communicate to solve a common task (Foerster et al., 2016; Lazaridou et al., 2016; Havrylov and Titov, 2017) . A good portion of such studies looks for traces of compositional structure in those languages, or even tries to inject such structure into them (Kottur et al., 2017; Choi et al., 2018; Mordatch and Abbeel, 2018; Andreas, 2019; Cogswell et al., 2019; Li and Bowling, 2019; Resnick et al., 2019; Chaabouni et al., 2020) . Besides possibly providing insights on how compositionality emerged in natural language (Townsend et al., 2018) , this emphasis is justified by the idea that a compositional language has various desirable properties. 
In particular, compositional languages are expected to help agents to better generalize to new (composite) inputs (Kottur et al., 2017) , and to be faster to acquire (Cogswell et al., 2019; Li and Bowling, 2019; Ren et al., 2019) .", "cite_spans": [ { "start": 139, "end": 162, "text": "(Foerster et al., 2016;", "ref_id": "BIBREF7" }, { "start": 163, "end": 186, "text": "Lazaridou et al., 2016;", "ref_id": "BIBREF13" }, { "start": 187, "end": 212, "text": "Havrylov and Titov, 2017)", "ref_id": "BIBREF8" }, { "start": 359, "end": 380, "text": "(Kottur et al., 2017;", "ref_id": "BIBREF11" }, { "start": 381, "end": 399, "text": "Choi et al., 2018;", "ref_id": "BIBREF5" }, { "start": 400, "end": 426, "text": "Mordatch and Abbeel, 2018;", "ref_id": "BIBREF16" }, { "start": 427, "end": 441, "text": "Andreas, 2019;", "ref_id": "BIBREF0" }, { "start": 442, "end": 464, "text": "Cogswell et al., 2019;", "ref_id": "BIBREF6" }, { "start": 465, "end": 486, "text": "Li and Bowling, 2019;", "ref_id": "BIBREF14" }, { "start": 487, "end": 508, "text": "Resnick et al., 2019;", "ref_id": "BIBREF20" }, { "start": 509, "end": 532, "text": "Chaabouni et al., 2020)", "ref_id": null }, { "start": 623, "end": 646, "text": "(Townsend et al., 2018)", "ref_id": "BIBREF21" }, { "start": 866, "end": 887, "text": "(Kottur et al., 2017)", "ref_id": "BIBREF11" }, { "start": 918, "end": 941, "text": "(Cogswell et al., 2019;", "ref_id": "BIBREF6" }, { "start": 942, "end": 963, "text": "Li and Bowling, 2019;", "ref_id": "BIBREF14" }, { "start": 964, "end": 981, "text": "Ren et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We engage here with this ongoing research pursuit. We step back and reflect on the benefits that compositionality can bring to the emergent languages: if there is none, then it is unlikely that agents will develop compositional languages on their own. Indeed, several studies have shown that compositionality does not emerge naturally among neural agents (e.g. Kottur et al., 2017; Andreas, 2019) . On the other hand, understanding what benefits compositionality could bring to a language would help us in establishing the conditions for its emergence.", "cite_spans": [ { "start": 361, "end": 381, "text": "Kottur et al., 2017;", "ref_id": "BIBREF11" }, { "start": 382, "end": 396, "text": "Andreas, 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Compositionality is typically seen as a property of a language, independent of the task being considered. However, the task will likely influence properties, such as generalization and ease of acquisition, with which compositionality is expected to correlate. Our experiments show that it is easy to construct tasks for which a compositional language is equally hard, or harder, to acquire and does not generalize better than a non-compositional one. Hence, language emergence researchers need to be clear about i) which benefits they expect from compositionality and ii) in which way compositionality would lead to those benefits in their setups. Otherwise, the agents will likely develop perfectly adequate communication systems that are not compositional.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before we proceed, let us clarify our definition of compositionality.
Linguists and philosophers have extensively studied the topic for centuries (see Pagin and Westerst\u00e5hl, 2010a,b, for a thorough review). However, the standard definition, that a language is compositional if the meaning of each expression in it is a function of the meanings of its parts and of the rules that combine them, is so general as to be vacuous for our purposes (under such a definition, even the highly opaque languages we will introduce below are compositional, contra our intuitions).", "cite_spans": [ { "start": 151, "end": 181, "text": "Pagin and Westerst\u00e5hl, 2010a,b", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Operationalizing compositionality", "sec_num": "2" }, { "text": "In most current language emergence research, the input to language is composite in the sense that it consists of ensembles of elements. In this context, intuitively, a language is compositional if its symbols denote input elements in a disentangled way, so that they can be freely juxtaposed to refer to arbitrary combinations of them. More precisely, the following property might suffice for a limited but practical characterization of compositionality. Given a set of atomic input elements (for example, a set of independent attribute values), each atomic symbol should refer to one and only one input element, independently of the other symbols it co-occurs with. 1 A language where all symbols respect this property is compositional in the intuitive sense that, if we know the symbols that denote a set of input elements, we can assemble them (possibly, following some syntactic rules of the language) to refer to the ensemble of those input elements, irrespective of whether we have ever observed the relevant ensemble. Consider for example a world where inputs consist of two attributes, each taking a number of values. A language licensing only two-character sequences, where the character in the first position refers to the value of the first attribute, and that in the second position independently refers to the value of the second, would be compositional in our sense. On the other hand, a language that also licenses two-character sequences, but where both characters in a sequence are needed to decode the values of both the first and the second input attribute, would not be compositional. We will refer to the lack of symbol interdependence in denoting distinct input elements as na\u00efve compositionality. 2 We believe that na\u00efve compositionality captures the intuition behind explicit and implicit definitions of compositionality in emergent language research. For example, Kottur et al. (2017) deem non-compositional those languages that either use single symbols to refer to ensembles of input elements, or where the meaning of a symbol depends on the context in which it occurs. Havrylov and Titov (2017) looked for symbol-position combinations that encode a single concept in an image, as a sign of compositional behavior. A na\u00efvely compositional language will maximize the two recently proposed compositionality measures of residual entropy (Resnick et al., 2019) and positional disentanglement (Chaabouni et al., 2020) .", "cite_spans": [ { "start": 1719, "end": 1720, "text": "2", "ref_id": null }, { "start": 1888, "end": 1908, "text": "Kottur et al.
(2017)", "ref_id": "BIBREF11" }, { "start": 2363, "end": 2385, "text": "(Resnick et al., 2019)", "ref_id": "BIBREF20" }, { "start": 2417, "end": 2441, "text": "(Chaabouni et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Operationalizing compositionality", "sec_num": "2" }, { "text": "Na\u00efve compositionality is also closely related to the notion of disentanglement in representation learning (Bengio et al., 2013) . Interestingly, Locatello et al. 2018reported that disentanglement is not necessarily helpful for sample efficiency in downstream tasks, as had been previously argued. This resonates with our results below.", "cite_spans": [ { "start": 107, "end": 128, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Operationalizing compositionality", "sec_num": "2" }, { "text": "We base our experimental study on a one-episode one-direction communication game, as commonly done in the relevant literature (Lazaridou et al., 2016 Havrylov and Titov, 2017; . In this setup, we have two agents, Sender and Receiver. An input i is fed into Sender, in turn Sender produces a message m, which is consumed by Receiver. Receiver produces its output\u00f4. Comparing the output\u00f4 with the ground-truth output o provides a loss. We used EGG to implement the experiments. 3 In contrast to the language emergence scenario, we use a hard-coded Sender agent that produces a fixed, pre-defined language. This allows us to easily control the (na\u00efve) compositionality of the language and measure how it affects Receiver's performance. This setup is akin to the motivating example of Li and Bowling (2019) .", "cite_spans": [ { "start": 126, "end": 149, "text": "(Lazaridou et al., 2016", "ref_id": "BIBREF13" }, { "start": 150, "end": 175, "text": "Havrylov and Titov, 2017;", "ref_id": "BIBREF8" }, { "start": 476, "end": 477, "text": "3", "ref_id": null }, { "start": 781, "end": 802, "text": "Li and Bowling (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Communication Game", "sec_num": "3" }, { "text": "We study two Receiver's characteristics: (i) acquisition speed, measured as the number of epoch needed to achieve a fixed level of performance on training set, and (ii) generalization performance on held-out data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Communication Game", "sec_num": "3" }, { "text": "To demonstrate that compositionality of a language alone, detached from the task at hand, does not necessarily lead to higher generalization or faster acquisition speed, we design two experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4" }, { "text": "The first experiment (attval) operates in an attribute-value world, similar to those of Kottur et al. (2017) ; . We fix two languages, one compositional and one not, and build three tasks: (i) \"easy\" for compositional language and \"hard\" for non-compositional; (ii) equally \"hard\" for both; (iii) \"hard\" for compositional language and \"easy\" for non-compositional language. Informally, we control the amount of computation needed by Receiver to perform a task starting from a language, where it can be equally hard to rely on compositional or non-compositional languages, or the answers could even be readily available in a non-compositional language.", "cite_spans": [ { "start": 88, "end": 108, "text": "Kottur et al. 
(2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4" }, { "text": "In the second experiment (coordinates), we design a single task that is equally \"easy\" for an entire family of languages (parameterized by a continuous value), including compositional and noncompositional ones. The task is to transmit points on the 2D plane (thus, the input ensembles here are pairs of point coordinates). Here, we leverage the observation that a typical neural model has a linear output layer, for which it is equally easy to learn any rotation of the ground-truth outputs. Such rotation-group-invariance could play role in games where continuous image embeddings are used as input (Lazaridou et al., 2016; Havrylov and Titov, 2017) .", "cite_spans": [ { "start": 600, "end": 624, "text": "(Lazaridou et al., 2016;", "ref_id": "BIBREF13" }, { "start": 625, "end": 650, "text": "Havrylov and Titov, 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental setup", "sec_num": "4" }, { "text": "Input Sender's input i is a two-dimensional vector; each dimension encodes one of two attributes, each having", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "n v values: i \u2208 {1..n v } \u00d7 {1..n v }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "Languages We consider two languages, with messages of length two and vocabulary size n v . The first language, lang-identity, represents the in-puts as-is, by putting the value of the first (second) attribute in the first (second) position: (m 1 , m 2 ) \u2190 (i 1 , i 2 ). In the second language, lang-entangled, the first and the second positions are obtained as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "m j \u2190 (i 1 + (\u22121) j \u2022 i 2 ) mod n v , j \u2208 {1, 2}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "(1) Lang-identity and lang-entangled have exactly the same n 2 v utterances. While lang-identity is na\u00efvely compositional (one symbol encodes one attribute only), lang-entangled is not: each symbol of an utterance encodes equal amount of information about both attributes and both symbols are equally needed for decoding each attribute. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "Tasks We consider three tasks. In all of them, Receiver outputs two discrete values, o \u2208 {1..n v } \u00d7 {1..n v }. In task-identity, Receiver has to recover the original input of Sender, i. 
In the second task, task-linear, Receiver needs to output two values that are obtained as an integer linear transformation of the original input values, modulo n_v:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "o \u2190 (A \u2022 i + b) mod n_v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "In the third task, task-entangled, we require Receiver to output", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "o_j \u2190 (i_1 + (\u22121)^j \u2022 i_2) mod n_v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "In this task, the output values derive from the same attribute transform applied in the lang-entangled language (Eq. 1). This language-task pair mirrors the lang-identity/task-identity pair: each symbol encodes one output value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "Architecture and hyperparameters Receiver is implemented as an LSTM (Hochreiter and Schmidhuber, 1997) or a GRU cell (Cho et al., 2014) . Its output layer specifies two categorical distributions over n_v values, encoding the two output values. As a loss, we use the sum of per-output negative log-likelihoods. We used the following hyperparameters: n_v = 31; hidden layer size 100; embedding size 50; batch size 32; 500 epochs of training with Adam (learning rate 10^{\u22122}). Each configuration was run 20 times with different random seeds. A random 1/5 of the data is used as the test set.", "cite_spans": [ { "start": 68, "end": 102, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF9" }, { "start": 117, "end": 135, "text": "(Cho et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Attval experiment", "sec_num": "4.1" }, { "text": "Input We sample points uniformly from the unit disk centered at the origin: i \u2208 R^2, i^T i \u2264 1. We sample 10^3 points for training, 10^3 for testing. Languages We consider two languages with utterances of length two. In the first language, lang-coordinate, Sender sequentially transmits both coordinates of a point: m_j \u2190 i_j. More precisely, the symbols refer to discretized coordinates from a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "n_v \u00d7 n_v square grid, covering [\u22121, 1] \u00d7 [\u22121, 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "This language is na\u00efvely compositional w.r.t. the coordinate-wise representation of the inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "We construct the second language, lang-rotated, in the following way. We start with lang-coordinate, but apply a rotation of the plane by \u03c0/4 before feeding a point into Sender. 5 Effectively, this makes Sender \"use\" a rotated coordinate grid for encoding the coordinates. As a result of the rotation, lang-rotated ceases to be na\u00efvely compositional in the original (non-rotated) world.
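As an illustration, here is a minimal sketch of the two coordinate encoders (again our own, with hypothetical names; the released code may differ), in which the only difference is a rotation applied before discretization:

    import numpy as np

    n_v = 100  # grid resolution, matching the hyperparameters below

    def discretize(point):
        # Map each coordinate in [-1, 1] to one of n_v grid cells.
        cells = ((np.asarray(point) + 1.0) / 2.0 * n_v).astype(int)
        return tuple(int(c) for c in np.clip(cells, 0, n_v - 1))

    def lang_coordinate(point):
        # Naively compositional: symbol j encodes coordinate j only.
        return discretize(point)

    def lang_rotated(point, angle=np.pi / 4):
        # Rotate the plane before encoding, so that each symbol mixes
        # both of the original coordinates.
        c, s = np.cos(angle), np.sin(angle)
        rotation = np.array([[c, -s], [s, c]])
        return discretize(rotation @ np.asarray(point))

    print(lang_coordinate([0.3, -0.5]))  # (65, 25)
    print(lang_rotated([0.3, -0.5]))     # (78, 42)

To succeed from lang-rotated, Receiver only has to learn the inverse rotation, a linear map that its output layer can represent at no extra cost.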
Each symbol of lang-rotated carries equal amounts of information about both coordinates of i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "Task Receiver has to recover the original (non-rotated) coordinates i of a point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "Architecture and loss Receiver is an LSTM with hidden size 100 and embedding size 50; n_v is 100; batch size is 32; we use Adam with learning rate 10^{\u22123}. As a loss, we use MSE. We run each configuration with 10 random seeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coordinates experiment", "sec_num": "4.2" }, { "text": "Attval experiment In Table 1 we provide the results of the attval experiment, depending on language, task, and Receiver architecture. We report the number of epochs needed to achieve perfect accuracy on the training set (top) and the accuracy on the held-out set after training (bottom).", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Consider the convergence speed first. For both Receiver architectures, lang-identity converges considerably faster than lang-entangled. This agrees with the findings of Li and Bowling (2019). However, in task-linear both languages demonstrate roughly the same convergence speed (the difference is not statistically significant). In task-entangled, lang-entangled becomes more efficient to acquire than the na\u00efvely compositional lang-identity. Interestingly, the acquisition times of task-identity/lang-identity and task-entangled/lang-entangled are symmetrical.", "cite_spans": [ { "start": 168, "end": 189, "text": "Li and Bowling (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Next, consider the test accuracy of the same runs as above, measuring generalization to new attribute-value combinations. We observe the same patterns: task-linear is equally hard to generalize from both languages; lang-identity reaches high test accuracy in task-identity, while lang-entangled leads to equally high accuracy on task-entangled. In contrast, lang-identity performs very poorly on task-entangled, just as lang-entangled does on task-identity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Coordinates experiment Figure 1 reports learning curves for train and test sets (cueing acquisition speed and generalization, respectively). There is little difference between the compositional and non-compositional languages, in either training or held-out loss trajectories. Note also that, evidently, the linear mapping required here to undo the non-na\u00efvely compositional transformation is easier for the networks than the non-linear operation we applied in the attval experiment, pointing to the importance of taking the intrinsic biases of neural networks into account when designing language emergence experiments.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Our toy experiments with hand-coded languages make the possibly obvious but currently overlooked point that, in isolation from the target task, there is nothing special about a language being (na\u00efvely) compositional.
A non-compositional language can be acquired as fast as, or faster than, a compositional one, and it can provide the same or better generalization capabilities. Thus, if our goal is to let compositional languages emerge, we should be very clear about which characteristics of our setup should lead to its emergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "Our concern is illustrated by the recent findings of Chaabouni et al. (2020), who observed that the degree of compositionality of emergent languages is not correlated with the generalization capabilities of the agents that rely on them to solve a task.", "cite_spans": [ { "start": 53, "end": 76, "text": "Chaabouni et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "Indeed, lacking any specific pressure towards developing a (na\u00efvely) compositional language, their agents were perfectly capable of developing generalizable but non-compositional communication systems. Our experiments provide a plausible explanation of their findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "A stronger conclusion is that perhaps we should altogether forget about compositionality as an end goal. The current emphasis on it might just be a misguided effect of our human-centric bias. We should instead directly concentrate on the properties we want agent languages to have, such as fast learning, transmission, and generalization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "We leave the definition of what counts as an atomic symbol open: it could be a single character, a character bound to a certain position in a message string, a character sequence, etc. 2 Na\u00efve in the sense that it is only appropriate when complex meanings are ensembles of atomic meanings. The definition breaks down when complex meanings result from functions that merge their components in ways other than simple ensembling, as is often the case in natural language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The code is available at https://github.com/facebookresearch/EGG/tree/master/egg/zoo/compositional_efficiency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that lang-entangled is still (non-na\u00efvely) compositional, in the sense that its messages can be predictably derived by applying Eq. 1 to the input pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Rotating by any angle in (0, \u03c0/2) makes the language non-compositional; \u03c0/4 maximally entangles it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": {
"BIBREF0": { "ref_id": "b0", "title": "Measuring compositionality in representation learning", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.07181" ] }, "num": null, "urls": [], "raw_text": "Jacob Andreas. 2019. Measuring compositionality in representation learning. arXiv preprint arXiv:1902.07181.", "links": null },
"BIBREF1": { "ref_id": "b1", "title": "Representation learning: A review and new perspectives", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "35", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8).", "links": null },
"BIBREF2": { "ref_id": "b2", "title": "Compositionality and generalization in emergent languages", "authors": [ { "first": "Rahma", "middle": [], "last": "Chaabouni", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Kharitonov", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Bouchacourt", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, and Marco Baroni. 2020. Compositionality and generalization in emergent languages. In Proceedings of ACL.", "links": null },
"BIBREF3": { "ref_id": "b3", "title": "Anti-efficient encoding in emergent communication", "authors": [ { "first": "Rahma", "middle": [], "last": "Chaabouni", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Kharitonov", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6290--6300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. 2019. Anti-efficient encoding in emergent communication. In Advances in Neural Information Processing Systems, pages 6290-6300.", "links": null },
"BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", "links": null },
"BIBREF5": { "ref_id": "b5", "title": "Compositional obverter communication learning from raw visual input", "authors": [ { "first": "Edward", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Nando", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Choi, Angeliki Lazaridou, and Nando de Freitas. 2018. Compositional obverter communication learning from raw visual input. In ICLR.", "links": null },
"BIBREF6": { "ref_id": "b6", "title": "Emergence of compositional language with deep generational transmission", "authors": [ { "first": "Michael", "middle": [], "last": "Cogswell", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09067" ] }, "num": null, "urls": [], "raw_text": "Michael Cogswell, Jiasen Lu, Stefan Lee, Devi Parikh, and Dhruv Batra. 2019. Emergence of compositional language with deep generational transmission. arXiv preprint arXiv:1904.09067.", "links": null },
"BIBREF7": { "ref_id": "b7", "title": "Learning to communicate with deep multi-agent reinforcement learning", "authors": [ { "first": "Jakob", "middle": [], "last": "Foerster", "suffix": "" }, { "first": "Ioannis", "middle": [ "Alexandros" ], "last": "Assael", "suffix": "" }, { "first": "Nando", "middle": [], "last": "De Freitas", "suffix": "" }, { "first": "Shimon", "middle": [], "last": "Whiteson", "suffix": "" } ], "year": 2016, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In NIPS, Barcelona, Spain.", "links": null },
"BIBREF8": { "ref_id": "b8", "title": "Emergence of language with multi-agent games: Learning to communicate with sequences of symbols", "authors": [ { "first": "Serhii", "middle": [], "last": "Havrylov", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2017, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. In NeurIPS.", "links": null },
"BIBREF9": { "ref_id": "b9", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null },
"BIBREF10": { "ref_id": "b10", "title": "EGG: a toolkit for research on emergence of lanGuage in games", "authors": [ { "first": "Eugene", "middle": [], "last": "Kharitonov", "suffix": "" }, { "first": "Rahma", "middle": [], "last": "Chaabouni", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Bouchacourt", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Kharitonov, Rahma Chaabouni, Diane Bouchacourt, and Marco Baroni. 2019. EGG: a toolkit for research on emergence of lanGuage in games. In EMNLP.", "links": null },
"BIBREF11": { "ref_id": "b11", "title": "Natural language does not emerge 'naturally' in multi-agent dialog", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "2962--2967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satwik Kottur, Jos\u00e9 Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge 'naturally' in multi-agent dialog. In Proceedings of EMNLP, pages 2962-2967, Copenhagen, Denmark.", "links": null },
"BIBREF12": { "ref_id": "b12", "title": "Emergence of linguistic communication from referential games with symbolic and pixel input", "authors": [ { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Tuyls", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. In ICLR.", "links": null },
"BIBREF13": { "ref_id": "b13", "title": "Multi-agent cooperation and the emergence of (natural) language", "authors": [ { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Peysakhovich", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1612.07182" ] }, "num": null, "urls": [], "raw_text": "Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182.", "links": null },
"BIBREF14": { "ref_id": "b14", "title": "Ease-of-teaching and language structure from emergent communication", "authors": [ { "first": "Fushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bowling", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02403" ] }, "num": null, "urls": [], "raw_text": "Fushan Li and Michael Bowling. 2019. Ease-of-teaching and language structure from emergent communication. arXiv preprint arXiv:1906.02403.", "links": null },
"BIBREF15": { "ref_id": "b15", "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "authors": [ { "first": "Francesco", "middle": [], "last": "Locatello", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Lucic", "suffix": "" }, { "first": "Gunnar", "middle": [], "last": "R\u00e4tsch", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Bachem", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.12359" ] }, "num": null, "urls": [], "raw_text": "Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar R\u00e4tsch, Sylvain Gelly, Bernhard Sch\u00f6lkopf, and Olivier Bachem. 2018. Challenging common assumptions in the unsupervised learning of disentangled representations. arXiv preprint arXiv:1811.12359.", "links": null },
"BIBREF16": { "ref_id": "b16", "title": "Emergence of grounded compositional language in multi-agent populations", "authors": [ { "first": "Igor", "middle": [], "last": "Mordatch", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In AAAI.", "links": null },
"BIBREF17": { "ref_id": "b17", "title": "Compositionality I: Definitions and variants", "authors": [ { "first": "Peter", "middle": [], "last": "Pagin", "suffix": "" }, { "first": "Dag", "middle": [], "last": "Westerst\u00e5hl", "suffix": "" } ], "year": 2010, "venue": "Philosophy Compass", "volume": "5", "issue": "3", "pages": "250--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Pagin and Dag Westerst\u00e5hl. 2010a. Compositionality I: Definitions and variants. Philosophy Compass, 5(3):250-264.", "links": null },
"BIBREF18": { "ref_id": "b18", "title": "Compositionality II: Arguments and problems", "authors": [ { "first": "Peter", "middle": [], "last": "Pagin", "suffix": "" }, { "first": "Dag", "middle": [], "last": "Westerst\u00e5hl", "suffix": "" } ], "year": 2010, "venue": "Philosophy Compass", "volume": "5", "issue": "3", "pages": "265--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Pagin and Dag Westerst\u00e5hl. 2010b. Compositionality II: Arguments and problems. Philosophy Compass, 5(3):265-282.", "links": null },
"BIBREF19": { "ref_id": "b19", "title": "Enhance the compositionality of emergent language by iterated learning", "authors": [ { "first": "Yi", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Shangmin", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Serhii", "middle": [], "last": "Havrylov", "suffix": "" }, { "first": "Shay", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Kirby", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the NeurIPS Emergent Communication Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Ren, Shangmin Guo, Serhii Havrylov, Shay Cohen, and Simon Kirby. 2019. Enhance the compositionality of emergent language by iterated learning. In Proceedings of the NeurIPS Emergent Communication Workshop.", "links": null },
"BIBREF20": { "ref_id": "b20", "title": "Capacity, bandwidth, and compositionality in emergent language learning", "authors": [ { "first": "Cinjon", "middle": [], "last": "Resnick", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Foerster", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.11424" ] }, "num": null, "urls": [], "raw_text": "Cinjon Resnick, Abhinav Gupta, Jakob Foerster, Andrew M Dai, and Kyunghyun Cho. 2019. Capacity, bandwidth, and compositionality in emergent language learning. arXiv preprint arXiv:1910.11424.", "links": null },
"BIBREF21": { "ref_id": "b21", "title": "Compositionality in animals and humans", "authors": [ { "first": "Simon", "middle": [], "last": "Townsend", "suffix": "" }, { "first": "Sabrina", "middle": [], "last": "Engesser", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Stoll", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Zuberb\u00fchler", "suffix": "" }, { "first": "Balthasar", "middle": [], "last": "Bickel", "suffix": "" } ], "year": 2018, "venue": "PLOS Biology", "volume": "16", "issue": "8", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Townsend, Sabrina Engesser, Sabine Stoll, Klaus Zuberb\u00fchler, and Balthasar Bickel. 2018. Compositionality in animals and humans. PLOS Biology, 16(8):1-7.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Coordinates experiment: log MSE vs. training epoch.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "content": "