{ "paper_id": "P08-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:34:21.085654Z" }, "title": "Trainable Generation of Big-Five Personality Styles through Data-driven Parameter Estimation", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cambridge University Engineering Department Trumpington Street Cambridge", "location": { "postCode": "CB2 1PZ", "country": "United Kingdom" } }, "email": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sheffield Sheffield", "location": { "postCode": "S1 4DP", "country": "United Kingdom" } }, "email": "lynwalker@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous work on statistical language generation has primarily focused on grammaticality and naturalness, scoring generation possibilities according to a language model or user feedback. More recent work has investigated data-driven techniques for controlling linguistic style without overgeneration, by reproducing variation dimensions extracted from corpora. Another line of work has produced handcrafted rule-based systems to control specific stylistic dimensions, such as politeness and personality. This paper describes a novel approach that automatically learns to produce recognisable variation along a meaningful stylistic dimensionpersonality-without the computational cost incurred by overgeneration techniques. We present the first evaluation of a data-driven generation method that projects multiple personality traits simultaneously and on a continuous scale. 
We compare our performance to a rule-based generator in the same domain.", "pdf_parse": { "paper_id": "P08-1020", "_pdf_hash": "", "abstract": [ { "text": "Previous work on statistical language generation has primarily focused on grammaticality and naturalness, scoring generation possibilities according to a language model or user feedback. More recent work has investigated data-driven techniques for controlling linguistic style without overgeneration, by reproducing variation dimensions extracted from corpora. Another line of work has produced handcrafted rule-based systems to control specific stylistic dimensions, such as politeness and personality. This paper describes a novel approach that automatically learns to produce recognisable variation along a meaningful stylistic dimension, personality, without the computational cost incurred by overgeneration techniques. We present the first evaluation of a data-driven generation method that projects multiple personality traits simultaneously and on a continuous scale. We compare our performance to a rule-based generator in the same domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the last 20 years, statistical language models (SLMs) have been used successfully in many tasks in natural language processing, and the data available for modeling has steadily grown (Lapata and Keller, 2005) . Langkilde and Knight (1998) first applied SLMs to statistical natural language generation (SNLG) , showing that high-quality paraphrases can be generated from an underspecified representation of meaning, by first applying a very underconstrained, rule-based overgeneration phase, whose outputs are then ranked by an SLM scoring phase. 
Since then, research in SNLG has explored a range of models for both dialogue and text generation.", "cite_spans": [ { "start": 188, "end": 213, "text": "(Lapata and Keller, 2005)", "ref_id": "BIBREF10" }, { "start": 216, "end": 243, "text": "Langkilde and Knight (1998)", "ref_id": "BIBREF8" }, { "start": 306, "end": 312, "text": "(SNLG)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One line of work has primarily focused on grammaticality and naturalness, scoring the overgeneration phase with an SLM, and evaluating against a gold-standard corpus, using string or tree-match metrics (Langkilde-Geary, 2002; Bangalore and Rambow, 2000; Chambers and Allen, 2004; Belz, 2005; Isard et al., 2006) .", "cite_spans": [ { "start": 202, "end": 225, "text": "(Langkilde-Geary, 2002;", "ref_id": "BIBREF9" }, { "start": 226, "end": 253, "text": "Bangalore and Rambow, 2000;", "ref_id": "BIBREF1" }, { "start": 254, "end": 279, "text": "Chambers and Allen, 2004;", "ref_id": "BIBREF4" }, { "start": 280, "end": 291, "text": "Belz, 2005;", "ref_id": "BIBREF2" }, { "start": 292, "end": 311, "text": "Isard et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another thread investigates SNLG scoring models trained using higher-level linguistic features to replicate human judgments of utterance quality (Rambow et al., 2001; Nakatsu and White, 2006; Stent and Guo, 2005) . 
The error of these scoring models approaches the gold-standard human ranking with a relatively small training set.", "cite_spans": [ { "start": 145, "end": 166, "text": "(Rambow et al., 2001;", "ref_id": "BIBREF21" }, { "start": 167, "end": 191, "text": "Nakatsu and White, 2006;", "ref_id": "BIBREF15" }, { "start": 192, "end": 212, "text": "Stent and Guo, 2005)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A third SNLG approach eliminates the overgeneration phase (Paiva and Evans, 2005) . It applies factor analysis to a corpus exhibiting stylistic variation, and then learns which generation parameters to manipulate to correlate with factor measurements. The generator was shown to reproduce intended factor levels across several factors, thus modelling the stylistic variation as measured in the original corpus.", "cite_spans": [ { "start": 58, "end": 81, "text": "(Paiva and Evans, 2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is a generation technique that can target multiple stylistic effects simultaneously and over a continuous scale, controlling stylistic dimensions that are commonly understood and thus meaningful to users and application developers. Our intended applications are output utterances for intelligent training or intervention systems, video game characters, or virtual environment avatars. In previous work, we presented PERSONAGE, a psychologically-informed rule-based generator based on the Big Five personality model, and we showed that PERSONAGE can project extreme personality on the extraversion scale, i.e. both introverted and extraverted personality types. We used the Big Five model to develop PERSONAGE for several reasons. First, the Big Five has been shown in psychology to explain much of the variation in human perceptions of personality differences. 
Second, we believe that the adjectives used to develop the Big Five model provide an intuitive, meaningful definition of linguistic style. Table 1 shows some of the trait adjectives associated with the extremes of each Big Five trait. Third, there are many studies linking personality to linguistic variables (Pennebaker and King, 1999; Mehl et al., 2006, inter alia) . See for more detail.", "cite_spans": [ { "start": 1183, "end": 1210, "text": "(Pennebaker and King, 1999;", "ref_id": "BIBREF18" }, { "start": 1211, "end": 1241, "text": "Mehl et al., 2006, inter alia)", "ref_id": null } ], "ref_spans": [ { "start": 1013, "end": 1020, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we further test the utility of basing stylistic variation on the Big Five personality model. The Big Five traits are represented by scalar values that range from 1 to 7, with values normally distributed among humans. While our previous work targeted extreme values of individual traits, here we show that we can target multiple personality traits simultaneously and over the continuous scales of the Big Five model. Section 2 describes a novel parameter-estimation method that automatically learns to produce recognisable variation for all Big Five traits, without overgeneration, implemented in a new SNLG called PERSONAGE-PE. We show that PERSONAGE-PE generates targets for multiple personality dimensions, using linear and non-linear parameter estimation models to predict generation parameters directly from the scalar targets. Section 3.2 shows that humans accurately perceive the intended variation, and Section 3.3 compares PERSONAGE-PE (trained) with PERSONAGE (rule-based). 
We delay a detailed discussion of related work to Section 4, where we summarize and discuss future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The data-driven parameter estimation method consists of a development phase and a generation phase (Section 3). The development phase:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Models", "sec_num": "2" }, { "text": "1. Uses a base generator to produce multiple utterances by randomly varying its parameters; 2. Collects human judgments rating the personality of each utterance; 3. Trains statistical models to predict the parameters from the personality judgments;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Models", "sec_num": "2" }, { "text": "4. Selects the best model for each parameter via cross-validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Models", "sec_num": "2" }, { "text": "We make minimal assumptions about the input to the generator to favor domain independence. The input is a speech act, a potential content pool that can be used to achieve that speech act, and five scalar personality parameters (1...7), specifying values for the continuous scalar dimensions of each trait in the Big Five model. See Table 1 . This requires a base generator that generates multiple outputs expressing the same input content by varying linguistic parameters related to the Big Five traits. We start with the PERSONAGE generator, which generates recommendations and comparisons of restaurants. We extend PERSONAGE with new parameters for a total of 67 parameters in PERSONAGE-PE. See Table 2 . These parameters are derived from psychological studies identifying linguistic markers of the Big Five traits (Pennebaker and King, 1999; Mehl et al., 2006, inter alia) . 
As PERSONAGE's input parameters are domain-independent, most parameters range continuously between 0 and 1, while pragmatic marker insertion parameters are binary, except for the SUBJECT IMPLICITNESS, STUTTERING and PRONOMINALIZATION parameters. Aggregate propositions with a relative clause, e.g. 'Chanpen Thai, which has great service, has nice decor' WITH CUE WORD Aggregate propositions using with, e.g. 'Chanpen Thai has great service, with nice decor' CONJUNCTION Join two propositions using a conjunction, or a comma if more than two propositions MERGE Merge the subject and verb of two propositions, e.g. 'Chanpen Thai has great service and nice decor' ALSO CUE WORD Join two propositions using also, e.g. 'Chanpen Thai has great service, also it has nice decor' CONTRAST -CUE WORD Contrast two propositions using while, but, however, on the other hand, e.g. 'While Chanpen Thai has great service, it has bad decor', 'Chanpen Thai has great service, but it has bad decor' JUSTIFY -CUE WORD Justify a proposition using because, since, so, e.g. 'Chanpen Thai is the best, because it has great service' CONCEDE -CUE WORD Concede a proposition using although, even if, but/though, e.g. 'Although Chanpen Thai has great service, it has bad decor', 'Chanpen Thai has great service, but it has bad decor though' MERGE WITH COMMA Restate a proposition by repeating only the object, e.g. 'Chanpen Thai has great service, nice waiters' CONJ. WITH ELLIPSIS Restate a proposition after replacing its object by an ellipsis, e.g. 'Chanpen Thai has . . . , it has great service' Pragmatic markers: SUBJECT IMPLICITNESS Make the restaurant implicit by moving the attribute to the subject, e.g. 'the service is great' NEGATION Negate a verb by replacing its modifier by its antonym, e.g. 'Chanpen Thai doesn't have bad service' SOFTENER HEDGES Insert syntactic elements (sort of, kind of, somewhat, quite, around, rather, I think that, it seems that, it seems to me that) to mitigate the strength of a proposition, e.g. 'Chanpen Thai has kind of great service' or 'It seems to me that Chanpen Thai has rather great service' EMPHASIZER HEDGES Insert syntactic elements (really, basically, actually, just) to strengthen a proposition, e.g. 'Chanpen Thai has really great service' or 'Basically, Chanpen .", "cite_spans": [ { "start": 821, "end": 848, "text": "(Pennebaker and King, 1999;", "ref_id": "BIBREF18" }, { "start": 849, "end": 879, "text": "Mehl et al., 2006, inter alia)", "ref_id": null } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 701, "end": 708, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Base Generator", "sec_num": "2.1" }, { "text": "We generate a sample of 160 random utterances by varying the parameters in Table 2 with a uniform distribution. This sample is intended to provide enough training material for estimating all 67 parameters for each personality dimension. Following , two expert judges (not the authors) familiar with the Big Five adjectives (Table 1) evaluate the personality of each utterance using the Ten-Item Personality Inventory (TIPI; Gosling et al., 2003) , and also judge the utterance's naturalness. Thus 11 judgments were made for each utterance for a total of 1760 judgments. The TIPI outputs a rating on a scale from 1 (low) to 7 (high) for each Big Five trait. 
The expert judgments are approximately normally distributed; Figure 1 shows the distribution for agreeableness.", "cite_spans": [ { "start": 424, "end": 445, "text": "Gosling et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 323, "end": 332, "text": "(Table 1)", "ref_id": "TABREF1" }, { "start": 719, "end": 727, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Random Sample Generation and Expert Judgments", "sec_num": "2.2" }, { "text": "Training data is created for each generation parameter (i.e. the output variable) to train statistical models predicting the optimal parameter value from the target personality scores. The models are thus based on the simplifying assumption that the generation parameters are independent. Any personality trait whose correlation with a generation decision is below 0.1 is removed from the training data. This has the effect of removing parameters that do not correlate strongly with any trait, which are set to a constant default value at generation time. Since the input parameter values may not be satisfiable depending on the input content, the actual generation decisions made for each utterance are recorded. For example, the CONCESSIONS decision value is the actual number of concessions produced in the utterance. To ensure that the models' output can control the generator, the generation decision values are normalized to match the input range (0...1) of PERSONAGE-PE. Thus the dataset consists of 160 utterances and the corresponding generation decisions, each associated with 5 personality ratings averaged over both judges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical Model Training", "sec_num": "2.3" }, { "text": "Parameter estimation models are trained to predict either continuous (e.g. VERBOSITY) or binary (e.g. EXCLAMATION) generation decisions. 
We compare various learning algorithms using the Weka toolkit (with default values unless specified; Witten and Frank, 2005) . Continuous parameters are modeled with a linear regression model (LR), an M5' model tree (M5), and a model based on support vector machines with a linear kernel (SVM). As regression models can extrapolate beyond the [0, 1] interval, the output parameter values are truncated if needed (at generation time) before being sent to the base generator. Binary parameters are modeled using classifiers that predict whether the parameter is enabled or disabled. We test a Naive Bayes classifier (NB), a J48 decision tree (J48), a nearest-neighbor classifier using one neighbor (NN), a Java implementation of the RIPPER rule-based learner (JRIP), the AdaBoost boosting algorithm (ADA), and a support vector machine classifier with a linear kernel (SVM).", "cite_spans": [ { "start": 238, "end": 261, "text": "Witten and Frank, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical Model Training", "sec_num": "2.3" }, { "text": "Figures 2, 3 and 4 show the models learned for the EXCLAMATION (binary), CONTENT POLARITY (continuous), and STUTTERING (continuous) parameters in Table 2 . The models predict generation parameters from input personality scores; note that sometimes the best performing model is non-linear.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 153, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Statistical Model Training", "sec_num": "2.3" }, { "text": "The AdaBoost model for the EXCLAMATION parameter (Figure 2) is a weighted list of condition-weight rules: if extraversion > 6.42 then 1 else 0 (weight 1.81); if extraversion > 4.42 then 1 else 0 (weight 0.38); if extraversion <= 6.58 then 1 else 0 (weight 0.22); if extraversion > 4.71 then 1 else 0 (weight 0.28); if agreeableness > 5.13 then 1 else 0 (weight 0.42); if extraversion <= 6.58 then 1 else 0 (weight 0.14); if extraversion > 4.79 then 1 else 0 (weight 0.19); if extraversion <= 6.58 then 1 else 0 (weight 0.17). 
Given input trait values, the AdaBoost model in Figure 2 outputs the class yielding the largest sum of weights for the rules returning that class. For example, the first rule of the EXCLAMATION model shows that an extraversion score above 6.42 out of 7 would increase the weight of the enabled class by 1.81. The fifth rule indicates that a target agreeableness above 5.13 would further increase the weight by .42. The STUTTERING model tree in Figure 4 lets us calculate that a low emotional stability (1.0) together with a neutral conscientiousness and openness to experience (4.0) yields a parameter value of .62 (see LM2), whereas a neutral emotional stability decreases the value to .17. Figure 4 also shows that personality traits that do not affect the parameter are removed: only emotional stability, conscientiousness and openness to experience affect stuttering. The linear model in Figure 3 shows that agreeableness has a strong effect on the CONTENT POLARITY parameter (.97 weight), but emotional stability, conscientiousness and openness to experience also have an effect.", "cite_spans": [], "ref_spans": [ { "start": 471, "end": 477, "text": "Figure", "ref_id": null }, { "start": 867, "end": 875, "text": "Figure 4", "ref_id": null }, { "start": 1119, "end": 1127, "text": "Figure 4", "ref_id": null }, { "start": 1338, "end": 1346, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Condition", "sec_num": null }, { "text": "The final step of the development phase identifies the best performing model(s) for each generation parameter via cross-validation. For continuous pa- Table 5 : Example outputs controlled by the parameter estimation models for a comparison (#1) and a recommendation (#2), with the average judges' ratings (Rating) and naturalness (Nat). Ratings are on a scale from 1 to 7, with 1 = very low (e.g. neurotic or introverted) and 7 = very high on the dimension (e.g. 
emotionally stable or extraverted).", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Model Selection", "sec_num": "2.4" }, { "text": "The generation phase of our parameter estimation SNLG method consists of the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Experiment", "sec_num": "3" }, { "text": "1. Use the best performing models to predict parameter values from the desired personality scores; 2. Generate the output utterance using the predicted parameter values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Experiment", "sec_num": "3" }, { "text": "We then evaluate the output utterances using naive human judges to rate their perceived personality and naturalness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Experiment", "sec_num": "3" }, { "text": "Given the best performing model for each generation parameter, we generate 5 utterances for each of 5 recommendation and 5 comparison speech acts. Each utterance targets an extreme value for two traits (either 1 or 7 out of 7) and neutral values for the remaining three traits (4 out of 7). The goal is for each utterance to project multiple traits on a continuous scale. To generate a range of alternatives, a Gaussian noise with a standard deviation of 10% of the full scale is added to each target value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Method", "sec_num": "3.1" }, { "text": "Subjects were 24 native English speakers (12 male and 12 female graduate students from a range of disciplines from both the U.K. and the U.S.). Subjects evaluate the naturalness and personality of each utterance using the TIPI (Gosling et al., 2003) . To limit the experiment's duration, only the two traits with extreme target values are evaluated for each utterance. 
Subjects thus answered 5 questions for each of 50 utterances: two TIPI items for each of the two extreme traits and one about naturalness (250 judgments in total per subject). Subjects were not told that the utterances were intended to manifest extreme trait values. Table 5 shows several sample outputs and the mean personality ratings from the human judges. For example, utterance 1.a projects high extraversion through the insertion of an exclamation mark based on the model in Figure 2 , whereas utterance 2.a conveys introversion by beginning with the filled pause err. The same utterance also projects low agreeableness by focusing on negative propositions, through a low CONTENT POLARITY parameter value as per the model in Figure 3 . This evaluation addresses a number of open questions discussed below.", "cite_spans": [ { "start": 227, "end": 249, "text": "(Gosling et al., 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 619, "end": 626, "text": "Table 5", "ref_id": null }, { "start": 835, "end": 843, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1087, "end": 1095, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluation Method", "sec_num": "3.1" }, { "text": "Q1: Is the personality projected by models trained on ratings from a few expert judges recognised by a larger sample of naive judges? (Section 3.2) Q2: Can a combination of multiple traits within a single utterance be detected by naive judges? (Section 3.2) Q3: How does PERSONAGE-PE compare to PERSONAGE, a psychologically-informed rule-based generator for projecting extreme personality? (Section 3.3) Q4: Does the parameter estimation SNLG method produce natural utterances? (Section 3.4) Table 6 shows that extraversion is the dimension modeled most accurately by the parameter estimation models, producing a .45 correlation with the subjects' ratings (p < .01). 
Emotional stability, agreeableness, and openness to experience ratings also correlate significantly with the target scores, with correlations of .39, .36 and .17 respectively (p < .01). Additionally, Table 6 shows that the magnitude of the correlation increases when considering the perception of a hypothetical average subject, i.e. smoothing individual variation by averaging the ratings over all 24 judges, producing a correlation r_avg up to .80 for extraversion. These correlations are unexpectedly high; in corpus analyses, significant correlations as low as .05 to .10 are typically observed between personality and linguistic markers (Pennebaker and King, 1999; Mehl et al., 2006) .", "cite_spans": [ { "start": 1305, "end": 1332, "text": "(Pennebaker and King, 1999;", "ref_id": "BIBREF18" }, { "start": 1333, "end": 1351, "text": "Mehl et al., 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 493, "end": 500, "text": "Table 6", "ref_id": null }, { "start": 863, "end": 870, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Method", "sec_num": "3.1" }, { "text": "Conscientiousness is the only dimension whose ratings do not correlate with the target scores. The comparison with rule-based results in Section 3.3 suggests that this is not because conscientiousness cannot be exhibited in our domain or manifested in a single utterance, so perhaps this arises from differing perceptions of conscientiousness between the expert and naive judges. Table 6 : Pearson's correlation coefficient r and mean absolute error e between the target personality scores and the 480 judges' ratings (20 ratings per trait for 24 judges); r_avg is the correlation between the personality scores and the average judges' ratings. Table 6 shows that the mean absolute error varies between 1.89 and 2.79 on a scale from 1 to 7. 
Such large errors result from the decision to ask judges to answer just the TIPI questions for the two traits that were the extreme targets (See Section 3.1), because the judges tend to use the whole scale, with approximately normally distributed ratings. This means that although the judges make distinctions leading to high correlations, they do so on a compressed scale. This explains the large correlations despite the magnitude of the absolute error. Table 7 shows results evaluating whether utterances targeting the extremes of a trait are perceived differently. The ratings differ significantly for all traits but conscientiousness (p \u2264 .001). Thus parameter estimation models can be used in applications that only require discrete binary variation. Table 7 : Average personality ratings for the utterances generated with the low and high target values for each trait on a scale from 1 to 7.", "cite_spans": [], "ref_spans": [ { "start": 380, "end": 387, "text": "Table 6", "ref_id": null }, { "start": 645, "end": 652, "text": "Table 6", "ref_id": null }, { "start": 1197, "end": 1204, "text": "Table 7", "ref_id": null }, { "start": 1498, "end": 1505, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Parameter Estimation Evaluation", "sec_num": "3.2" }, { "text": "It is important to emphasize that generation parameters were predicted based on 5 target personality values. Thus, the results show that individual traits are perceived even when utterances project other traits as well, confirming that the Big Five theory models independent dimensions and thus provides a useful and meaningful framework for modeling variation in language. 
Additionally, although we do not directly evaluate the perception of mid-range values of personality target scores, the results suggest that mid-range personality is modeled correctly because the neutral target scores do not affect the perception of extreme traits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation Evaluation", "sec_num": "3.2" }, { "text": "PERSONAGE is a rule-based personality generator based on handcrafted parameter settings derived from psychological studies. In previous work, we showed that this approach generates utterances that are perceptibly different along the extraversion dimension. Table 8 compares the mean ratings of the utterances generated by PERSONAGE-PE with ratings of 20 utterances generated by PERSONAGE for each extreme of each Big Five scale (40 for extraversion, resulting in 240 handcrafted utterances in total). Table 8 shows that the handcrafted parameter settings project a significantly more extreme personality for 6 traits out of 10. However, the learned parameter models for neuroticism, disagreeableness, unconscientiousness and openness to experience do not perform significantly worse than the handcrafted generator. These findings are promising as we discuss further in Section 4. Table 8 : Pair-wise comparison between the ratings of the utterances generated using PERSONAGE-PE with extreme target values (Learned Parameters), and the ratings for utterances generated with Mairesse and Walker's rule-based PERSONAGE generator (Rule-based). Ratings are averaged over all judges.", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 241, "text": "Table 8", "ref_id": null }, { "start": 478, "end": 485, "text": "Table 8", "ref_id": null }, { "start": 857, "end": 864, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Comparison with Rule-Based Generation", "sec_num": "3.3" }, { "text": "The naive judges also evaluated the naturalness of the outputs of our trained models. 
Table 9 shows that the average naturalness is 3.98 out of 7, which is significantly lower (p < .05) than the naturalness of handcrafted and randomly generated utterances reported in our previous work. It is possible that the differences arise from judgments of utterances targeting multiple traits, or that the naive judges are more critical.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Naturalness Evaluation", "sec_num": "3.4" }, { "text": "Average naturalness over all traits: Rule-based 4.59, Random 4.38, Learned 3.98. Table 9 : Average naturalness ratings for utterances generated using (1) PERSONAGE, the rule-based generator, (2) the random utterances (expert judges) and (3) the outputs of PERSONAGE-PE using the parameter estimation models (Learned, naive judges). The means differ significantly at the p < .05 level (two-tailed independent sample t-test).", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Naturalness Evaluation", "sec_num": "3.4" }, { "text": "We present a new method for generating linguistic variation projecting multiple personality traits continuously, by combining and extending previous research in statistical natural language generation (Paiva and Evans, 2005; Rambow et al., 2001; Isard et al., 2006) . While handcrafted rule-based approaches are limited to variation along a small number of discrete points (Hovy, 1988; Walker et al., 1997; Lester et al., 1997; Power et al., 2003; Cassell and Bickmore, 2003; Piwek, 2003; Rehm and Andr\u00e9, in press), we learn models that predict parameter values for any value on the variation dimension scales. Additionally, our data-driven approach can be applied to any dimension that is meaningful to human judges, and it provides an elegant way to project multiple dimensions simultaneously, by including the relevant dimensions as features of the parameter models' training data. 
Isard et al. (2006) and Mairesse and Walker also propose a personality generation method, in which a data-driven personality model selects the best utterance from a large candidate set. Isard et al.'s technique has not been evaluated, while Mairesse and Walker's overgenerate and score approach is inefficient. Paiva and Evans' (2005) technique does not overgenerate, but it requires a search for the optimal generation decisions according to the learned models. Our approach does not require any search or overgeneration, as parameter estimation models predict the generation decisions directly from the target variation dimensions. This technique is therefore beneficial for real-time generation. Moreover, the variation dimensions of Paiva and Evans' data-driven technique are extracted from a corpus: there is thus no guarantee that they can be easily interpreted by humans, or that they generalise to other corpora. Previous work has shown that modeling the relation between personality and language is far from trivial (Pennebaker and King, 1999; Argamon et al., 2005; Oberlander and Nowson, 2006), suggesting that the control of personality is a harder problem than the control of data-driven variation dimensions.", "cite_spans": [ { "start": 201, "end": 224, "text": "(Paiva and Evans, 2005;", "ref_id": "BIBREF17" }, { "start": 225, "end": 245, "text": "Rambow et al., 2001;", "ref_id": "BIBREF21" }, { "start": 246, "end": 265, "text": "Isard et al., 2006;", "ref_id": "BIBREF7" }, { "start": 373, "end": 385, "text": "(Hovy, 1988;", "ref_id": "BIBREF6" }, { "start": 386, "end": 406, "text": "Walker et al., 1997;", "ref_id": "BIBREF24" }, { "start": 407, "end": 427, "text": "Lester et al., 1997;", "ref_id": "BIBREF11" }, { "start": 428, "end": 447, "text": "Power et al., 2003;", "ref_id": "BIBREF20" }, { "start": 448, "end": 475, "text": "Cassell and Bickmore, 2003;", "ref_id": "BIBREF3" }, { "start": 476, "end": 488, "text": "Piwek, 2003;", "ref_id": "BIBREF19" }, { "start": 489, "end": 497, 
"text": "Rehm and", "ref_id": "BIBREF22" }, { "start": 895, "end": 914, "text": "Isard et al. (2006)", "ref_id": "BIBREF7" }, { "start": 1900, "end": 1927, "text": "(Pennebaker and King, 1999;", "ref_id": "BIBREF18" }, { "start": 1928, "end": 1949, "text": "Argamon et al., 2005;", "ref_id": "BIBREF0" }, { "start": 1950, "end": 1978, "text": "Oberlander and Nowson, 2006;", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "We present the first human perceptual evaluation of a data-driven stylistic variation method. In terms of our research questions in Section 3.1, we show that models trained on expert judges to project multiple traits in a single utterance generate utterances whose personality is recognized by naive judges. There is only one other similar evaluation of an SNLG (Rambow et al., 2001) . Our models perform only slightly worse than a handcrafted rule-based generator in the same domain. These findings are promising as (1) parameter estimation models are able to target any combination of traits over the full range of the Big Five scales; (2) they do not benefit from psychological knowledge, i.e. they are trained on randomly generated utterances.", "cite_spans": [ { "start": 362, "end": 383, "text": "(Rambow et al., 2001)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "This work also has several limitations that should be addressed in future work. Even though the parameters of PERSONAGE-PE were suggested by psychological studies , some of them are not modeled successfully by our approach, and thus omitted from Tables 3 and 4 . This could be due to the relatively small development dataset size (160 utterances to optimize 67 parameters), or to the implementation of some parameters. 
The strong parameter-independence assumption could also be responsible, but we are not aware of any state-of-the-art implementation for learning multiple dependent variables, and such a joint approach could further aggravate data sparsity issues.", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 260, "text": "Tables 3 and 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "In addition, it is unclear why PERSONAGE performs better for projecting extreme personality and produces more natural utterances, and why PERSONAGE-PE fails to project conscientiousness correctly. It might be possible to improve the parameter estimation models with a larger sample of random utterances at development time, or with additional extreme data generated using the rule-based approach. Such hybrid models are likely to perform better for extreme target scores, as they are trained on more uniformly distributed ratings (e.g. compared to the normal distribution in Figure 1). 
In addition, we have only shown that personality can be expressed by information presentation speech-acts in the restaurant domain; future work should assess the extent to which the parameters derived from psychological findings are culture, domain, and speech act dependent.", "cite_spans": [], "ref_spans": [ { "start": 575, "end": 584, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Lexical predictors of personality type", "authors": [ { "first": "S", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "S", "middle": [], "last": "Dhawle", "suffix": "" }, { "first": "M", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "J", "middle": [], "last": "Pennebaker", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Joint Annual Meeting of the Interface and the Classification Society of North America", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Argamon, S. Dhawle, M. Koppel, and J. Pennebaker. Lexical predictors of personality type. In Proceedings of the Joint Annual Meeting of the Interface and the Classification Society of North America, 2005.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Exploiting a probabilistic hierarchical model for generation", "authors": [ { "first": "S", "middle": [], "last": "Bangalore", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "42--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bangalore and O. Rambow. Exploiting a probabilistic hierarchical model for generation. 
In Proceedings of the 18th International Conference on Computational Linguistics (COLING), pages 42-48, 2000.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Corpus-driven generation of weather forecasts", "authors": [ { "first": "A", "middle": [], "last": "Belz", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 3rd Corpus Linguistics Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Belz. Corpus-driven generation of weather forecasts. In Proceedings of the 3rd Corpus Linguistics Conference, 2005.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Negotiated collusion: Modeling social language and its relationship effects in intelligent agents. User Modeling and User-Adapted Interaction", "authors": [ { "first": "J", "middle": [], "last": "Cassell", "suffix": "" }, { "first": "T", "middle": [], "last": "Bickmore", "suffix": "" } ], "year": 2003, "venue": "", "volume": "13", "issue": "", "pages": "89--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cassell and T. Bickmore. Negotiated collusion: Modeling social language and its relationship effects in intelligent agents. User Modeling and User-Adapted Interaction, 13:89-132, 2003.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Stochastic language generation in a dialogue system: Toward a domain independent generator", "authors": [ { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "J", "middle": [], "last": "Allen", "suffix": "" } ], "year": 2004, "venue": "Proceedings 5th SIGdial Workshop on Discourse and Dialogue", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Chambers and J. Allen. Stochastic language generation in a dialogue system: Toward a domain independent generator. 
In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, 2004.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A very brief measure of the big five personality domains", "authors": [ { "first": "S", "middle": [ "D" ], "last": "Gosling", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Rentfrow", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Swann", "suffix": "" } ], "year": 2003, "venue": "Journal of Research in Personality", "volume": "37", "issue": "", "pages": "504--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. D. Gosling, P. J. Rentfrow, and W. B. Swann. A very brief measure of the big five personality domains. Journal of Research in Personality, 37:504-528, 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating Natural Language under Pragmatic Constraints", "authors": [ { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Hovy. Generating Natural Language under Pragmatic Constraints. Lawrence Erlbaum Associates, 1988.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Individuality and alignment in generated dialogues", "authors": [ { "first": "A", "middle": [], "last": "Isard", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockmann", "suffix": "" }, { "first": "J", "middle": [], "last": "Oberlander", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 4th International Natural Language Generation Conference (INLG)", "volume": "", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Isard, C. Brockmann, and J. Oberlander. Individuality and alignment in generated dialogues. 
In Proceedings of the 4th International Natural Language Generation Conference (INLG), pages 22-29, 2006.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generation that exploits corpus-based statistical knowledge", "authors": [ { "first": "I", "middle": [], "last": "Langkilde", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "704--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Langkilde and K. Knight. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL), pages 704-710, 1998.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An empirical verification of coverage and correctness for a general-purpose sentence generator", "authors": [ { "first": "I", "middle": [], "last": "Langkilde-Geary", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 1st International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Langkilde-Geary. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proceedings of the 1st International Conference on Natural Language Generation, 2002.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Web-based models for natural language processing", "authors": [ { "first": "M", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "F", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2005, "venue": "ACM Transactions on Speech and Language Processing", "volume": "2", "issue": "", "pages": "1--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Lapata and F. Keller. Web-based models for natural language processing. 
ACM Transactions on Speech and Language Processing, 2:1-31, 2005.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The persona effect: affective impact of animated pedagogical agents", "authors": [ { "first": "J", "middle": [], "last": "Lester", "suffix": "" }, { "first": "S", "middle": [], "last": "Converse", "suffix": "" }, { "first": "S", "middle": [], "last": "Kahler", "suffix": "" }, { "first": "S", "middle": [], "last": "Barlow", "suffix": "" }, { "first": "B", "middle": [], "last": "Stone", "suffix": "" }, { "first": "R", "middle": [], "last": "Bhogal", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the SIGCHI conference on Human factors in computing systems", "volume": "", "issue": "", "pages": "359--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lester, S. Converse, S. Kahler, S. Barlow, B. Stone, and R. Bhogal. The persona effect: affective impact of animated pedagogical agents. Proceedings of the SIGCHI conference on Human factors in computing systems, pages 359-366, 1997.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "PERSONAGE: Personality generation for dialogue", "authors": [ { "first": "F", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "496--503", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Mairesse and M. A. Walker. PERSONAGE: Personality generation for dialogue. 
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 496-503, 2007.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Using linguistic cues for the automatic recognition of personality in conversation and text", "authors": [ { "first": "F", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Mehl", "suffix": "" }, { "first": "R", "middle": [ "K" ], "last": "Moore", "suffix": "" } ], "year": 2007, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "30", "issue": "", "pages": "457--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Mairesse, M. A. Walker, M. R. Mehl, and R. K. Moore. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of Artificial Intelligence Research (JAIR), 30:457-500, 2007.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Mehl", "suffix": "" }, { "first": "S", "middle": [ "D" ], "last": "Gosling", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" } ], "year": 2006, "venue": "Journal of Personality and Social Psychology", "volume": "90", "issue": "", "pages": "862--877", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. R. Mehl, S. D. Gosling, and J. W. Pennebaker. Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life. 
Journal of Personality and Social Psychology, 90:862-877, 2006.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to say it well: Reranking realizations by predicted synthesis quality", "authors": [ { "first": "C", "middle": [], "last": "Nakatsu", "suffix": "" }, { "first": "M", "middle": [], "last": "White", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1113--1120", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Nakatsu and M. White. Learning to say it well: Reranking realizations by predicted synthesis quality. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1113-1120, 2006.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Whose thumb is it anyway? classifying author personality from weblog text", "authors": [ { "first": "J", "middle": [], "last": "Oberlander", "suffix": "" }, { "first": "S", "middle": [], "last": "Nowson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Oberlander and S. Nowson. Whose thumb is it anyway? classifying author personality from weblog text. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), 2006.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Empirically-based control of natural language generation", "authors": [ { "first": "D", "middle": [ "S" ], "last": "Paiva", "suffix": "" }, { "first": "R", "middle": [], "last": "Evans", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "58--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. 
S. Paiva and R. Evans. Empirically-based control of natural language generation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 58-65, 2005.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Linguistic styles: Language use as an individual difference", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "King", "suffix": "" } ], "year": 1999, "venue": "Journal of Personality and Social Psychology", "volume": "77", "issue": "", "pages": "1296--1312", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. W. Pennebaker and L. A. King. Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77:1296-1312, 1999.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A flexible pragmatics-driven language generator for animated agents", "authors": [ { "first": "P", "middle": [], "last": "Piwek", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Piwek. A flexible pragmatics-driven language generator for animated agents. In Proceedings of the Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL), 2003.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Generating texts with style", "authors": [ { "first": "R", "middle": [], "last": "Power", "suffix": "" }, { "first": "D", "middle": [], "last": "Scott", "suffix": "" }, { "first": "N", "middle": [], "last": "Bouayad-Agha", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 4th International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. 
Power, D. Scott, and N. Bouayad-Agha. Generating texts with style. In Proceedings of the 4th International Conference on Intelligent Text Processing and Computational Linguistics, 2003.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Evaluating a trainable sentence planner for a spoken dialogue travel system", "authors": [ { "first": "O", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "M", "middle": [], "last": "Rogati", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Rambow, M. Rogati, and M. A. Walker. Evaluating a trainable sentence planner for a spoken dialogue travel system. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), 2001.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "From annotated multimodal corpora to simulated human-like behaviors", "authors": [ { "first": "M", "middle": [], "last": "Rehm", "suffix": "" }, { "first": "E", "middle": [], "last": "Andr\u00e9", "suffix": "" } ], "year": null, "venue": "Modeling Communication with Robots and Virtual Humans", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Rehm and E. Andr\u00e9. From annotated multimodal corpora to simulated human-like behaviors. In I. Wachsmuth and G. Knoblich, editors, Modeling Communication with Robots and Virtual Humans. Springer, Berlin, Heidelberg, in press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A new data-driven approach for multimedia presentation generation", "authors": [ { "first": "A", "middle": [], "last": "Stent", "suffix": "" }, { "first": "H", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2005, "venue": "Proc. 
EuroIMSA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stent and H. Guo. A new data-driven approach for multimedia presentation generation. In Proc. EuroIMSA, 2005.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improvising linguistic style: Social and affective bases for agent personality", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "J", "middle": [ "E" ], "last": "Cahn", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Whittaker", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 1st Conference on Autonomous Agents", "volume": "", "issue": "", "pages": "96--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. A. Walker, J. E. Cahn, and S. J. Whittaker. Improvising linguistic style: Social and affective bases for agent personality. In Proceedings of the 1st Conference on Autonomous Agents, pages 96-105, 1997.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Data Mining: Practical machine learning tools and techniques", "authors": [ { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "E", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. H. Witten and E. Frank. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann, San Francisco, CA, 2005.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Distribution of average agreeableness ratings from the 2 expert judges for 160 random utterances.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "AdaBoost model predicting the EXCLAMATION parameter. Given input trait values, the model outputs the class yielding the largest sum of weights for the rules returning that class. 
Class 0 = disabled, class 1 = enabled.(normalized) Content polarity = 0.054 -0.102 * (normalized) emotional stability + 0.970 * (normalized) agreeableness -0.110 * (normalized) conscientiousness + 0.013 * (normalized) openness to experience", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "SVM model with a linear kernel predicting the CONTENT POLARITY parameter.", "num": null }, "TABREF0": { "type_str": "table", "num": null, "text": "Trait High Low Extraversion warm, assertive, sociable, excitement seeking, active, spontaneous, optimistic, talkative shy, quiet, reserved, passive, solitary, moody Emotional stability calm, even-tempered, reliable, peaceful, confident neurotic, anxious, depressed, self-conscious Agreeableness trustworthy, considerate, friendly, generous, helpful unfriendly, selfish, suspicious, uncooperative, malicious Conscientiousness competent, disciplined, dutiful, achievement striving disorganised, impulsive, unreliable, forgetful Openness to experience creative, intellectual, curious, cultured, complex narrow-minded, conservative, ignorant, simple", "html": null, "content": "" }, "TABREF1": { "type_str": "table", "num": null, "text": "Example adjectives associated with extreme values of the Big Five trait scales.", "html": null, "content": "
" }, "TABREF4": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
" }, "TABREF5": { "type_str": "table", "num": null, "text": "Figure 4: M5' model tree predicting the STUTTERING parameter.", "html": null, "content": "
Conscientiousness \u2264 3.875:
  Stuttering = -0.0136 * emotional stability + 0.0098 * conscientiousness + 0.0063 * openness to experience + 0.0126
Conscientiousness > 3.875, Emotional stability \u2264 4.375:
  Stuttering = -0.1531 * emotional stability + 0.004 * conscientiousness + 0.1122 * openness to experience + 0.3129
Conscientiousness > 3.875, Emotional stability > 4.375:
  Stuttering = -0.0142 * emotional stability + 0.004 * conscientiousness + 0.0076 * openness to experience + 0.0576
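The model tree above can be read as a two-level decision procedure with a linear model at each leaf. A minimal sketch using the split points and coefficients from Figure 4 (the function name and plain-float interface are our own; trait values are on the 1-7 rating scale):

```python
def stuttering(emotional_stability, conscientiousness, openness):
    """M5' model tree for the STUTTERING parameter (Figure 4)."""
    if conscientiousness <= 3.875:
        return (-0.0136 * emotional_stability + 0.0098 * conscientiousness
                + 0.0063 * openness + 0.0126)
    if emotional_stability <= 4.375:
        # low emotional stability targets receive the strongest stuttering
        return (-0.1531 * emotional_stability + 0.004 * conscientiousness
                + 0.1122 * openness + 0.3129)
    return (-0.0142 * emotional_stability + 0.004 * conscientiousness
            + 0.0076 * openness + 0.0576)
```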
Continuous parameters | LR | M5 | SVM
Content parameters:
VERBOSITY | 0.24 | 0.26 | 0.21
RESTATEMENTS | 0.14 | 0.14 | 0.04
REPETITIONS | 0.13 | 0.13 | 0.08
CONTENT POLARITY | 0.46 | 0.46 | 0.47
REPETITIONS POLARITY | 0.02 | 0.15 | 0.06
CONCESSIONS | 0.23 | 0.23 | 0.12
CONCESSIONS POLARITY | -0.01 | 0.16 | 0.07
POLARISATION | 0.20 | 0.21 | 0.20
Syntactic template selection:
CLAIM COMPLEXITY | 0.10 | 0.33 | 0.26
CLAIM POLARITY | 0.04 | 0.04 | 0.05
Aggregation operations:
INFER - WITH CUE WORD | 0.03 | 0.03 | 0.01
INFER - ALSO CUE WORD | 0.10 | 0.10 | 0.06
JUSTIFY - SINCE CUE WORD | 0.03 | 0.07 | 0.05
JUSTIFY - SO CUE WORD | 0.07 | 0.07 | 0.04
JUSTIFY - PERIOD | 0.36 | 0.35 | 0.21
CONTRAST - PERIOD | 0.27 | 0.26 | 0.26
RESTATE - MERGE WITH COMMA | 0.18 | 0.18 | 0.09
CONCEDE - ALTHOUGH CUE WORD | 0.08 | 0.08 | 0.05
CONCEDE - EVEN IF CUE WORD | 0.05 | 0.05 | 0.03
Pragmatic markers:
SUBJECT IMPLICITNESS | 0.13 | 0.13 | 0.04
STUTTERING INSERTION | 0.16 | 0.23 | 0.17
PRONOMINALIZATION | 0.22 | 0.20 | 0.17
Lexical choice parameters:
LEXICAL FREQUENCY | 0.21 | 0.21 | 0.19
WORD LENGTH | 0.18 | 0.18 | 0.15
" }, "TABREF6": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
Table 3: Pearson's correlation between parameter model predictions and continuous parameter values, for different regression models. Parameters that do not correlate with any trait are omitted. Aggregation operations are associated with a rhetorical relation (e.g. INFER). Results are averaged over a 10-fold cross-validation.
rameters, Table 3 evaluates modeling accuracy by comparing the correlations between the model's predictions and the actual parameter values in the test folds. Table 4 reports results for binary parameter classifiers, by comparing the F-measures of the enabled class. Best performing models are identified in bold; parameters that do not correlate with any trait or that produce a poor modeling accuracy are omitted.
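The evaluation metric used here, Pearson's correlation between a model's predictions and the true parameter values, averaged over the ten test folds, can be sketched as follows (the sample data in the test is invented):

```python
import math

def pearson_r(preds, truths):
    """Pearson's correlation between predicted and true parameter values."""
    n = len(preds)
    mean_p, mean_t = sum(preds) / n, sum(truths) / n
    cov = sum((p - mean_p) * (t - mean_t) for p, t in zip(preds, truths))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in preds))
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in truths))
    return cov / (sd_p * sd_t)
```

In the cross-validation setup described above, this statistic is computed on each held-out fold and the ten values are averaged.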
The CONTENT POLARITY parameter is modeled
" }, "TABREF7": { "type_str": "table", "num": null, "text": "Les Routiers and Radio Perfecto... You would probably appreciate them. Radio Perfecto is in the East Village with kind of acceptable food. Les Routiers is located in Manhattan. Its price is 41 dollars. Err... you would probably appreciate Trattoria Rustica, wouldn't you? It's in Manhattan, also it's an italian restaurant. It offers poor ambience, also it's quite costly.", "html": null, "content": "
Table 4: F-measure of the enabled class for classification models of binary parameters. Parameters that do not correlate with any trait are omitted. Results are averaged over a 10-fold cross-validation. JRIP models are not shown as they never perform best.
the most accurately, with the SVM model in Figure 3 producing a correlation of .47 with the true parameter values. Models of the PERIOD aggregation operation also perform well, with a linear regression model yielding a correlation of .36 when realizing a justification, and .27 when contrasting two propositions. CLAIM COMPLEXITY and VERBOSITY are also modeled successfully, with correlations of .33 and .26 using a model tree. The model tree controlling the STUTTERING parameter illustrated in Figure 4 produces a correlation of .23. For binary parameters, Table 4 shows that the Naive Bayes classifier is generally the most accurate, with F-measures of .40 for the IN-GROUP MARKER parameter, and .32 for both the insertion of filled pauses (err) and tag questions. The AdaBoost algorithm best predicts the EXCLAMATION parameter, with an F-measure of .38 for the model in Figure 2.
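The boosted decision procedure behind the EXCLAMATION model can be sketched as weighted voting: each learned rule maps the input trait values to a class (exclamations enabled or disabled), and the class with the largest total rule weight wins. The example rules and weights below are invented for illustration; the actual rules are the ones learned by AdaBoost in Figure 2.

```python
def boosted_vote(rules, traits):
    """Return the class with the largest sum of weights over the rules."""
    votes = {}
    for weight, rule in rules:
        cls = rule(traits)  # each rule maps trait values to a class (0 or 1)
        votes[cls] = votes.get(cls, 0.0) + weight
    return max(votes, key=votes.get)

# Hypothetical rules: enable exclamations for extraverted, agreeable targets.
rules = [
    (0.8, lambda t: 1 if t["extraversion"] > 4.5 else 0),
    (0.5, lambda t: 1 if t["agreeableness"] > 4.0 else 0),
    (0.3, lambda t: 0),  # default rule, always votes for 'disabled'
]
```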
" }, "TABREF10": { "type_str": "table", "num": null, "text": "significant increase or decrease of the variation range over the average rule-based ratings (p < .05, two-tailed)", "html": null, "content": "
Trait | Rule-based Low | Rule-based High | Learned Low | Learned High
Extraversion | 2.96 | 5.98 | 3.69 \u2022 | 5.05 \u2022
Emotional stability | 3.29 | 5.96 | 3.75 | 4.75 \u2022
Agreeableness | 3.41 | 5.66 | 3.42 | 4.33 \u2022
Conscientiousness | 3.71 | 5.53 | 4.16 | 4.15 \u2022
Openness to experience | 2.89 | 4.21 | 3.71 \u2022 | 4.06
\u2022: significant increase or decrease over the average rule-based ratings (p < .05, two-tailed)
" } } } }