{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:45:28.693153Z" }, "title": "Adaptation of a Lexical Organization for Social Engineering Detection and Response Generation", "authors": [ { "first": "Archna", "middle": [], "last": "Bhatia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "abhatia@ihmc.us" }, { "first": "Adam", "middle": [], "last": "Dalton", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "adalton@ihmc.us" }, { "first": "Brodie", "middle": [], "last": "Mather", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "bmather@ihmc.us" }, { "first": "Sashank", "middle": [], "last": "Santhanam", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "" }, { "first": "Samira", "middle": [], "last": "Shaikh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "sshaikh2@uncc.edu" }, { "first": "Alan", "middle": [], "last": "Zemel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "azemel@albany.edu" }, { "first": "Tomek", "middle": [], "last": "Strzalkowski", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rensselaer Polytechnic Institute NY", "location": {} }, "email": "bdorr@ihmc.us" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a paradigm for extensible lexicon development based on Lexical Conceptual 
Structure to support social engineering detection and response generation. We leverage the central notions of ask (elicitation of behaviors such as providing access to money) and framing (risk/reward implied by the ask). We demonstrate improvements in ask/framing detection through refinements to our lexical organization and show that response generation qualitatively improves as ask/framing detection performance improves. The paradigm presents a systematic and efficient approach to resource adaptation for improved task-specific performance.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a paradigm for extensible lexicon development based on Lexical Conceptual Structure to support social engineering detection and response generation. We leverage the central notions of ask (elicitation of behaviors such as providing access to money) and framing (risk/reward implied by the ask). We demonstrate improvements in ask/framing detection through refinements to our lexical organization and show that response generation qualitatively improves as ask/framing detection performance improves. The paradigm presents a systematic and efficient approach to resource adaptation for improved task-specific performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Social engineering (SE) refers to sophisticated use of deception to manipulate individuals into divulging confidential or personal information for fraudulent purposes. Standard cybersecurity defenses are ineffective because attackers attempt to exploit humans rather than system vulnerabilities. Accordingly, we have built a user alter-ego application that detects and engages a potential attacker in ways that expose their identity and intentions. 
Our system relies on a paradigm for extensible lexicon development that leverages the central notion of ask, i.e., elicitation of behaviors such as PERFORM (e.g., clicking a link) or GIVE (e.g., providing access to money). This paradigm also enables detection of risk/reward (or LOSE/GAIN) implied by an ask, which we call framing (e.g., lose your job, get a raise). These elements are used for countering attacks through bot-produced responses and actions. The system is tested in an email environment, but is applicable to other forms of online communications, e.g., SMS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Ask Framing (a) It is a pleasure to inform you that you have won 1.7Eu. Contact me. (jw11@example.com) PERFORM contact (jw11@...) More formally, an ask is a statement that elicits a behavior from a potential victim, e.g., please buy me a gift card. Although asks are not always explicitly stated (Drew and Couper-Kuhlen, 2014; Zemel, 2017) , we discern these through navigation of semantically classified verbs. The task of ask detection, specifically, is targeted event detection based on parsing and/or Semantic Role Labeling (SRL) to identify semantic class triggers (Dorr et al., 2020) . Framing sets the stage for the ask, i.e., the purported threat (LOSE) or benefit (GAIN) that the social engineer wants the potential victim to believe will obtain through compliance or lack thereof. It should be noted that there is no one-to-one correspondence between ask and framing in the ask/framing detection output. Depending on the content, there may be zero, one, or more asks and/or framings in the output. Our lexical organization is based on Lexical Conceptual Structure (LCS), a formalism that supports resource construction and extensions to new applications such as SE detection and response generation. 
Semantic classes of verbs with similar meanings (give, donate) are readily augmented through adoption of the STYLUS variant of LCS (Dorr and Voss, 2018) and (Dorr and Olsen, 2018) . We derive LCS+ from asks/framings and employ CATVAR (Habash and Dorr, 2003) to relate word variants (e.g., reference and refer). Table 1 illustrates LCS+ Ask/Framing output for three (presumed) SE emails: two PERFORM asks and one GIVE ask. 1 Parentheses () refer to ask arguments, often a link that the potential victim might choose to click. Ask/framing outputs are provided to downstream response generation. For example, a possible response for Table 1(a) is I will contact asap. A comparison of LCS+ to two related resources shows that our lexical organization supports refinements, improves ask/framing detection and top ask identification, and yields qualitative improvements in response generation. LCS+ is 1 To view our system's ask/framing outputs on a larger dataset (the same set of emails which were also used for ground truth (GT) creation described below), refer to https://social-threats.github.io/ panacea-ask-detection/data/case7LCS+ AskDetectionOutput.txt.", "cite_spans": [ { "start": 296, "end": 326, "text": "(Drew and Couper-Kuhlen, 2014;", "ref_id": "BIBREF23" }, { "start": 327, "end": 339, "text": "Zemel, 2017)", "ref_id": "BIBREF46" }, { "start": 569, "end": 588, "text": "(Dorr et al., 2020)", "ref_id": "BIBREF20" }, { "start": 1323, "end": 1344, "text": "(Dorr and Voss, 2018)", "ref_id": "BIBREF48" }, { "start": 1349, "end": 1371, "text": "(Dorr and Olsen, 2018)", "ref_id": "BIBREF19" }, { "start": 1426, "end": 1449, "text": "(Habash and Dorr, 2003)", "ref_id": "BIBREF49" } ], "ref_spans": [ { "start": 1503, "end": 1510, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Email", "sec_num": null }, { "text": "deployed in a SE detection and response generation system. 
Even though LCS+ is designed for the SE domain, the approach to development of LCS+ described in this paper serves as a guideline for developing similar lexica for other domains. Correspondingly, even though development of LCS+ is one of the contributions of this paper, the main contribution is not this resource but the systematic and efficient approach to resource adaptation for improved task-specific performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GAIN", "sec_num": null }, { "text": "In our experiments described in Section 3., we compare LCS+, the lexical resource we developed for the SE domain, against two strong baselines: STYLUS and Thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "STYLUS baseline: As one of the baselines for our experiments, we leverage a publicly available resource, STYLUS, that is based on Lexical Conceptual Structure (LCS) (Dorr and Voss, 2018) and (Dorr and Olsen, 2018) . The LCS representation is an underlying representation of spatial and motion predicates (Jackendoff, 1983; Jackendoff, 1990; Dorr, 1993) , such as fill and go, and their metaphorical extensions, e.g., temporal (the hour flew by) and possessional (he sold the book). 2 Prior work (Jackendoff, 1996; Levin, 1993; Olsen, 1994; Chang et al., 2007; Chang et al., 2010; Kipper et al., 2007; Palmer et al., 2017) has suggested that there is a close relation between the underlying lexical-semantic structures of verbs and nominal predicates and their syntactic argument structure. We leverage this relationship to extend the existing STYLUS verb classes for the resource adaptation to the SE domain through creation of LCS+, which is discussed below. For our STYLUS verb list, we group verbs into four lists based on asks (PERFORM, GIVE) and framings (LOSE, GAIN). The STYLUS verb list can be accessed here: https://social-threats.github.
io/panacea-ask-detection/resources/original_lcs_classes_based_verbsList.txt. Examples of this classification are shown below (with total verb count in parentheses):", "cite_spans": [ { "start": 164, "end": 185, "text": "(Dorr and Voss, 2018)", "ref_id": "BIBREF48" }, { "start": 190, "end": 212, "text": "(Dorr and Olsen, 2018)", "ref_id": "BIBREF19" }, { "start": 303, "end": 321, "text": "(Jackendoff, 1983;", "ref_id": "BIBREF29" }, { "start": 322, "end": 339, "text": "Jackendoff, 1990;", "ref_id": "BIBREF30" }, { "start": 340, "end": 351, "text": "Dorr, 1993)", "ref_id": "BIBREF21" }, { "start": 494, "end": 512, "text": "(Jackendoff, 1996;", "ref_id": "BIBREF31" }, { "start": 513, "end": 525, "text": "Levin, 1993;", "ref_id": "BIBREF35" }, { "start": 526, "end": 538, "text": "Olsen, 1994;", "ref_id": "BIBREF39" }, { "start": 539, "end": 558, "text": "Chang et al., 2007;", "ref_id": "BIBREF17" }, { "start": 559, "end": 578, "text": "Chang et al., 2010;", "ref_id": "BIBREF18" }, { "start": 579, "end": 599, "text": "Kipper et al., 2007;", "ref_id": "BIBREF33" }, { "start": 600, "end": 620, "text": "Palmer et al., 2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 PERFORM (214): remove, redeem, refer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 GIVE (81): administer, contribute, donate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 LOSE (615): penalize, stick, punish, ruin", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 GAIN (49): accept, earn, grab, win Assignment of verbs to these four ask/framing categories is determined by a computational linguist, with approximately a person-day of human effort. 
Identification of genre-specific verbs is achieved through analysis of 46 emails (406 clauses) after parsing/POS/SRL is applied. As an example, the verb position (Class 9.1) and the verb delete (Class 10.1) both have an underlying placement or existence component with an affected object (e.g., the cursor in position your cursor or the account in delete your account), coupled with a location (e.g., here or from the system). Accordingly, Put verbs in Class 9.1 and Remove verbs in Class 10.1 are grouped together and aligned with a PERFORM ask (as are many other classes with similar properties: Banish, Steal, Cheat, Bring, Obtain, etc.). Analogously, verbs in the Send and Give classes are aligned with a GIVE ask, as all verbs in these two classes have a sender/giver and a recipient. Lexical assignment of framings is handled similarly, i.e., verbs are aligned with LOSE and GAIN according to their argument structures and components of meaning. It is assumed that the potential victim of a SE attack stands to lose or gain something, depending on non-compliance or compliance with a social engineer's ask. As an example, the framing associated with the verb losing (Class 10.5) in Read carefully to avoid losing account access indicates the risk of losing access to a service; Class 10.5 is thus aligned with LOSE. Analogously, the verb win (Class 13.5.1) in You have won 1.7M Eu. is an alluring statement with a purported gain to the potential victim; thus Class 13.5.1 is aligned with GAIN. In short, verbs in classes associated with LOSE imply negative consequences (Steal, Impact by Contact, Destroy, Leave) whereas verbs in classes associated with GAIN imply positive consequences (Get, Obtain). Some classes are associated with more than one ask/framing category: Steal (Class 10.5) and Cheat (Class 10.6) are aligned with both PERFORM (redeem, free) and LOSE (forfeit, deplete). 
Such distinctions are not captured in the lexical resource, but are algorithmically resolved during ask/framing detection, where contextual clues provide disambiguation capability. For example, Redeem coupon is a directive with an implicit request to click a link, i.e., a PERFORM. By contrast, Avoid losing account access is a statement of risk, i.e., a LOSE. The focus here is not on the processes necessary for distinguishing between these contextually determined senses, but on the organizing principles underlying both, in support of application-oriented resource construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "LCS+ resource for SE adapted from STYLUS: Setting disambiguation aside, resource improvements are still necessary for the SE domain because, due to its size and coverage, STYLUS is likely to predict a large number of both true and false positives during ask/framing detection. To reduce false positives without sacrificing true positives, we leverage an important property of the LCS paradigm: its extensible organizational structure, wherein similar verbs are grouped together. With just one person-day of effort by two computational linguists (authors on the paper; the algorithm developer, also an author, was not involved in this process), a new lexical organization, referred to as \"LCS+\", is derived from STYLUS, taken together with asks/framings from a set of 46 malicious/legitimate emails. 3 These emails are a random subset of 1000+ emails (69 malicious and 938 legitimate) sent from an external red team to five volunteers in a large government agency using social engineering tactics. Verbs from these emails are tied into particular LCS classes with matching semantic peers and argument structures. These emails are proprietary, but the resulting lexicon is released here: https://social-threats.github.io/panacea-ask-detection/resources/lcsPlus_classes_based_verbsList.txt. 
Two categories (PERFORM and LOSE) are modified in the adaptation from STYLUS to LCS+:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 PERFORM (6 del, 44 added): copy, notify", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 GIVE (no changes)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 LOSE (174 del, 11 added): forget, surrender", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 GAIN (no changes) Thesaurus baseline: The Thesaurus baseline is based on an expansion of simple forms of framings. Specifically, the verbs gain, lose, give, and perform are used as search terms to find related verbs in a standard but robust resource, thesaurus.com (referred to as \"Thesaurus\"). The verbs thus found are grouped into these same four categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "\u2022 PERFORM (44) The resulting Thesaurus verb list is publicly released here:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "https://social-threats.github.io/panacea-ask-detection/resources/thesaurus_based_verbsList.txt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "We also adopt categorial variations through CATVAR (Habash and Dorr, 2003) to map between different parts of speech, e.g., winner(N) \u2192 win(V). STYLUS, LCS+, and Thesaurus contain verbs only, but asks/framings are often nominalized. For example, you can reference your gift card is an implicit ask to examine a gift card, yet without CATVAR this ask is potentially missed. 
CATVAR recognizes reference as a nominal form of refer, thus enabling the identification of this ask as a PERFORM.", "cite_spans": [ { "start": 51, "end": 74, "text": "(Habash and Dorr, 2003)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2." }, { "text": "Intrinsic evaluation of our resources is based on comparison of ask/framing detection to an adjudicated ground truth (henceforth, GT), a set of 472 clauses from system output on 20 unseen emails. These 20 emails are a random subset of 2600+ messages collected in an email account set up to receive messages from an internal red team as well as \"legitimate\" messages from corporate and academic mailing lists. As alluded to earlier, these 20 emails are distinct from the dataset used for resource adaptation to produce the task-related LCS+. The GT is produced through human adjudication and correction by a computational linguist 4 of initial ask/framing labels automatically assigned by our system to the 472 clauses. System output also includes the identification of a \"top ask\" for each email, based on the degree to which ask argument positions are filled. 5 Top asks are adjudicated by the computational linguist once the ask/framing labels are adjudicated. The resulting GT is accessible here: https://social-threats.github.io/panacea-ask-detection/data/. The GT is used to measure the precision/recall/F of three variants of ask detection output (Ask, Framing, and Top Ask) corresponding to our three lexica: Thesaurus, STYLUS, and LCS+. LCS+ is favored (with statistical significance) over the two strong baselines, Thesaurus and STYLUS. Table 3 presents results: Recall for framings is highest for STYLUS, but at the cost of more false positives (lower precision). F-scores increase for STYLUS over Thesaurus, and for LCS+ over STYLUS. 
McNemar (McNemar, 1947) tests yield statistically significant differences for asks/framings at the 2% level between Thesaurus and LCS+ and between STYLUS and LCS+. 6 It should be noted that not all clauses in the GT are asks or framings: the vast majority (80%) are neither (i.e., they are true negatives). We note that an alternative to the Thesaurus and LCS baselines would be a bag-of-words lexicon, with no organizational structure. However, the key contribution of this work is the ease of adaptation through classes, obviating the need for training data (which are exceedingly difficult to obtain). Classes enable extension of a small set of verbs to a larger range of options, e.g., if a human determines from a small set of task-related emails that provide is relevant, the task-adapted lexicon will include administer, contribute, and donate for free. If a class-based lexical organization is replaced by bag-of-words, we stand to lose efficient (1-person-day) resource adaptation and, moreover, training data would be needed. A first step toward extrinsic evaluation is inspection of responses generated from each resource's top ask/framing pairs. Table 1 (given earlier) shows LCS+ ask/framing pairs Table 2 : Lexical organization of LCS+ relies on Ask Categories (PERFORM, GIVE) and Framing Categories (GAIN, LOSE). Italicized exemplars with boldfaced triggers illustrate usage for each class. Boldfaced class numbers indicate those STYLUS classes that were modified to yield the LCS+ resource. Below are corresponding examples of generated responses 7 for all three resources, based on a templatic approach that leverages ask/framing hierarchical structure and corresponding confidence scores. This module is part of a larger, separate publication. There are qualitative differences in these responses. For example, in (a) Thesaurus (T) yields no asks/framings; thus a canned response is generated. 
By contrast, the same email yields a more responsive output for STYLUS (S), and a more focused response for LCS+ (L). Similar distinctions are found for responses in (b) and (c). Note that in the LCS+ condition, if there is no match found using LCS+, downstream response generation prompts the attacker (e.g., \"please clarify\") until an interpretable ask or framing appears. In this SE task, not all responses move the conversation forward. A central goal of the SE task is to waste the attacker's time, play along, and possibly extract information that could unveil their identity.", "cite_spans": [ { "start": 861, "end": 862, "text": "5", "ref_id": null }, { "start": 1557, "end": 1588, "text": "STYLUS. McNemar (McNemar, 1947)", "ref_id": null } ], "ref_spans": [ { "start": 1364, "end": 1371, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 2715, "end": 2738, "text": "Table 1 (given earlier)", "ref_id": "TABREF1" }, { "start": 2768, "end": 2775, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3." }, { "text": "LCS is used in interlingual machine translation (Voss and Dorr, 1995; Habash and Dorr, 2002) , lexical acquisition 7 For brevity, excerpts are shown in lieu of full emails. 8 LCS+ detects both GIVE/send and PERFORM/respond. (Habash et al., 2006) , cross-language information retrieval (Levow et al., 2000) , language generation (Traum and Habash, 2000) , and intelligent language tutoring (Dorr, 1997) . 
STYLUS (Dorr and Voss, 2018) and (Dorr and Olsen, 2018) systematizes LCS based on several studies (Levin and Rappaport Hovav, 1995; Rappaport Hovav and Levin, 1998) , but to our knowledge our work is the first use of LCS in a conversational context, within a cyber domain.", "cite_spans": [ { "start": 48, "end": 69, "text": "(Voss and Dorr, 1995;", "ref_id": "BIBREF45" }, { "start": 70, "end": 92, "text": "Habash and Dorr, 2002)", "ref_id": "BIBREF26" }, { "start": 115, "end": 116, "text": "7", "ref_id": null }, { "start": 224, "end": 245, "text": "(Habash et al., 2006)", "ref_id": "BIBREF27" }, { "start": 285, "end": 305, "text": "(Levow et al., 2000)", "ref_id": "BIBREF36" }, { "start": 328, "end": 352, "text": "(Traum and Habash, 2000)", "ref_id": "BIBREF44" }, { "start": 389, "end": 401, "text": "(Dorr, 1997)", "ref_id": "BIBREF22" }, { "start": 411, "end": 432, "text": "(Dorr and Voss, 2018)", "ref_id": "BIBREF48" }, { "start": 437, "end": 459, "text": "(Dorr and Olsen, 2018)", "ref_id": "BIBREF19" }, { "start": 502, "end": 535, "text": "(Levin and Rappaport Hovav, 1995;", "ref_id": "BIBREF34" }, { "start": 536, "end": 568, "text": "Rappaport Hovav and Levin, 1998)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4." }, { "text": "Our approach relates to work on conversational agents (CAs), where neural models automatically generate responses (Gao et al., 2019; Santhanam and Shaikh, 2019) , topic models produce focused responses (Dziri et al., 2018) , self-disclosure yields targeted responses (Ravichander and Black, 2018) , and SE detection employs topic models (Bhakta and Harris, 2015) and NLP of conversations (Sawa et al., 2016) . However, all such approaches are limited to a pre-defined set of topics, constrained by the training corpus. 
Other prior work focuses on persuasion detection/prediction (Hidey and McKeown, 2018) by leveraging argument structure, but for the purpose of judging when a persuasive attempt might be successful in subreddit discussions dedicated to changing opinions (ChangeMyView). Our work aims to achieve effective dialogue for countering (rather than adopting) persuasive attempts. Text-based semantic analysis for SE detection (Kim et al., 2018) is related to our work but differs in that our work focuses not just on detecting an attack, but on engaging with an attacker. Whereas a bot might be employed to warn a potential victim that an attack is underway, our bots are designed to communicate with a social engineer in ways that elicit identifying information.", "cite_spans": [ { "start": 114, "end": 132, "text": "(Gao et al., 2019;", "ref_id": "BIBREF25" }, { "start": 133, "end": 160, "text": "Santhanam and Shaikh, 2019)", "ref_id": "BIBREF42" }, { "start": 202, "end": 222, "text": "(Dziri et al., 2018)", "ref_id": "BIBREF24" }, { "start": 267, "end": 296, "text": "(Ravichander and Black, 2018)", "ref_id": "BIBREF41" }, { "start": 337, "end": 362, "text": "(Bhakta and Harris, 2015)", "ref_id": "BIBREF16" }, { "start": 388, "end": 407, "text": "(Sawa et al., 2016)", "ref_id": "BIBREF43" }, { "start": 580, "end": 605, "text": "(Hidey and McKeown, 2018)", "ref_id": "BIBREF28" }, { "start": 938, "end": 956, "text": "(Kim et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4." }, { "text": "Both STYLUS and LCS+ support ask/framing detection in service of bot-produced responses. Intrinsically, LCS+ is superior to both STYLUS and Thesaurus when measured against human-adjudicated output, verified for significance by McNemar tests at the 2% level. Extrinsically, STYLUS supports more responsive bot outputs, and LCS+ supports more focused bot outputs. 
A more general advantage of adapting LCS+ to the SE domain is that the adaptation can serve as a guideline for developing similar resources for other domains, supporting focused outputs appropriate to each domain. The main contribution of this paper is not the development of a particular task-specific resource, nor to suggest that LCS+ is a generic resource for many tasks, but to present a systematic, efficient approach to resource adaptation that can generalize to other tasks for improved task-specific performance, e.g., understanding viewpoints in social media or detecting motives behind activities of political groups. We acknowledge that our extrinsic evaluation is limited. While we have demonstrated the efficacy of ask detection approaches on a set of representative emails, a quantitative evaluation is required to test the statistical significance of our extrinsic observations. Future work is planned to conduct experiments with crowd-sourced workers judging the efficacy and effectiveness of generated responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "LCS is publicly available at https://github.com/ihmc/LCS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It should be noted that this resource adaptation is based on an analysis of emails not related to, and without access to, the adjudicated ground truth described in section 3. That is, the 46 emails used for resource adaptation are distinct from the 20 emails used for creating adjudicated ground truth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The adjudicator is an author but is not the algorithm developer, who is also an author.5 Argument positions express information such as the ask type (i.e., PERFORM), context to the ask (i.e., 
financial), and the ask target (e.g., \"you\" in \"Did you send me the money?\").6 Tested values were TP+TN vs FP+FN, i.e., significance of change in total error rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by DARPA through AFRL Contract FA8650-18-C-7881 and through Army Contract W31P4Q-17-C-0066. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of DARPA, AFRL, Army, or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Put Verbs: Position your cursor here 10.1 Remove Verbs: Delete virus from machine", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Put Verbs: Position your cursor here 10.1 Remove Verbs: Delete virus from machine", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Banish Verbs\u21925 deleted (banish, deport, evacuate, extradite, recall): Remove fee from your account 10.5 Steal Verbs: Redeem coupon below", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banish Verbs\u21925 deleted (banish, deport, evacu- ate, extradite, recall): Remove fee from your account 10.5 Steal Verbs: Redeem coupon below", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cheat Verbs: Free yourself from debt 11.3 Bring and Take Verbs: Bring me a gift card 13.5.2 Obtain: Purchase two gift cards", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheat Verbs: Free yourself from debt 11.3 Bring and Take Verbs: Bring me a gift card 13.5.2 Obtain: Purchase two gift 
cards", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "check, eye, try, view, visit): View this website 37.1 Transfer of Message: Ask for a refund 37.2 Tell Verbs: Tell them $50 per card 37", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sight Verbs\u21921 deleted (regard), 44 added (e.g., check, eye, try, view, visit): View this website 37.1 Transfer of Message: Ask for a refund 37.2 Tell Verbs: Tell them $50 per card 37.4 Communication: Sign the back of the card 42.1 Murder Verbs: Eliminate your debt here 44 Destroy Verbs: Destroy the card", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Price Verbs: Calculate an amount here GIVE", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Price Verbs: Calculate an amount here GIVE:", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Send Verbs: Send me the gift cards 13.1 Give Verbs: Give today 13.2 Contribute Verbs: Donate! 13.3 Future Having: Advance me $100 13.4.1 Verbs of Fulfilling: Credit your account", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Send Verbs: Send me the gift cards 13.1 Give Verbs: Give today 13.2 Contribute Verbs: Donate! 13.3 Future Having: Advance me $100 13.4.1 Verbs of Fulfilling: Credit your account", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Want Verbs: I need three gift cards LOSE", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Want Verbs: I need three gift cards LOSE:", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "forfeit, lose, relinquish, sacrifice): Don't forfeit this chance! 10.6 Cheat Verbs: Are your funds depleted? 
17.1 Throw Verbs: Don", "authors": [], "year": null, "venue": "Steal Verbs\u219211 added", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steal Verbs\u219211 added (e.g., forfeit, lose, relin- quish, sacrifice): Don't forfeit this chance! 10.6 Cheat Verbs: Are your funds depleted? 17.1 Throw Verbs: Don't toss out this coupon 17.2 Pelt Verbs: Scams bombarding you?", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hit Verbs: Don't be beaten by debt 18.2 Swat Verbs: Sluggish market getting you down? 18.3 Spank Verbs: Clobbered by fees", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hit Verbs: Don't be beaten by debt 18.2 Swat Verbs: Sluggish market getting you down? 18.3 Spank Verbs: Clobbered by fees?", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Impact by Contact: Avoid being hit by malware 19 Poke Verbs: Stuck with debt?", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Impact by Contact: Avoid being hit by malware 19 Poke Verbs: Stuck with debt?", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Characterize Verbs\u219216 deleted (e.g., appreciate, envisage): Repudiated by creditors?", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Characterize Verbs\u219216 deleted (e.g., appreciate, envisage): Repudiated by creditors?", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Orphan Verbs\u21925 deleted (apprentice, canonize, cuckold, knight, recruit): Avoid crippling debt", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Orphan Verbs\u21925 deleted (apprentice, canonize, cuckold, knight, recruit): Avoid crippling debt", "links": 
null }, "BIBREF12": { "ref_id": "b12", "title": "Amuse Verbs\u219291 deleted (e.g., amaze, amuse, gladden): Don't be disarmed by hackers", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amuse Verbs\u219291 deleted (e.g., amaze, amuse, gladden): Don't be disarmed by hackers", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Are you lamenting your credit score?", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Are you lamenting your credit score?", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Complain Verbs: Want your gripes answered? 42.1 Murder Verbs: Debt killing your credit? 42.2 Poison Verbs: Strangled by debt? 44 Destroy Verbs: PC destroyed by malware? 48.2 Disappearance: Your account will expire 51.2 Leave Verbs", "authors": [], "year": null, "venue": "Marvel Verbs\u21921 deleted (feel): Living in fear? 33 Judgment Verbs: Need to remove penalties? 37.8", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marvel Verbs\u21921 deleted (feel): Living in fear? 33 Judgment Verbs: Need to remove penalties? 37.8 Complain Verbs: Want your gripes answered? 42.1 Murder Verbs: Debt killing your credit? 42.2 Poison Verbs: Strangled by debt? 44 Destroy Verbs: PC destroyed by malware? 48.2 Disappearance: Your account will expire 51.2 Leave Verbs: Found your abandoned prize GAIN: 13.5.1 Get: You are a winner of 1M Eu.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Obtain: You can recover your credit rating 6", "authors": [], "year": null, "venue": "Bibliographical References", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "5.2 Obtain: You can recover your credit rating 6. 
Bibliographical References", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantic analysis of dialogs to detect social engineering attacks", "authors": [ { "first": "R", "middle": [], "last": "Bhakta", "suffix": "" }, { "first": "I", "middle": [ "G" ], "last": "Harris", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015)", "volume": "", "issue": "", "pages": "424--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhakta, R. and Harris, I. G. (2015). Semantic analysis of dialogs to detect social engineering attacks. Proceedings of the 2015 IEEE 9th International Conference on Se- mantic Computing (IEEE ICSC 2015), pages 424-427.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Linguistic Object Model", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Chang", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Shahani", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Cipollone", "suffix": "" }, { "first": "M", "middle": [ "V" ], "last": "Calcagno", "suffix": "" }, { "first": "M", "middle": [ "J B" ], "last": "Olsen", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Parkinson", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, S. C., Shahani, R. C., Cipollone, D. J., Calcagno, M. V., Olsen, M. J. B., and Parkinson, D. J. (2007). Lin- guistic Object Model, January. 
7,171,352.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexical Semantic Structure", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Chang", "suffix": "" }, { "first": "R", "middle": [ "C" ], "last": "Shahani", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Cipollone", "suffix": "" }, { "first": "M", "middle": [ "V" ], "last": "Calcagno", "suffix": "" }, { "first": "M", "middle": [ "J B" ], "last": "Olsen", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Parkinson", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, S. C., Shahani, R. C., Cipollone, D. J., Calcagno, M. V., Olsen, M. J. B., and Parkinson, D. J. (2010). Lex- ical Semantic Structure, March. 7,689,410.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Lexical conceptual structure of literal and metaphorical spatial language: A case study of push", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "M", "middle": [ "B" ], "last": "Olsen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First International Workshop on Spatial Language Understanding", "volume": "", "issue": "", "pages": "31--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. J. and Olsen, M. B. (2018). Lexical conceptual structure of literal and metaphorical spatial language: A case study of push. 
In Proceedings of the First Inter- national Workshop on Spatial Language Understanding, pages 31-40.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detecting asks in social engineering attacks: Impact of linguistic and structural knowledge", "authors": [ { "first": "B", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "A", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "A", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "B", "middle": [], "last": "Mather", "suffix": "" }, { "first": "B", "middle": [], "last": "Hebenstreit", "suffix": "" }, { "first": "S", "middle": [], "last": "Santhanam", "suffix": "" }, { "first": "Z", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "S", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "T", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 2020, "venue": "Proceedings of Thirty-Fourth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B., Bhatia, A., Dalton, A., Mather, B., Hebenstreit, B., Santhanam, S., Cheng, Z., Zemel, S., and Strza- lkowski, T. (2020). Detecting asks in social engineering attacks: Impact of linguistic and structural knowledge. In Proceedings of Thirty-Fourth AAAI Conference on Ar- tificial Intelligence 2020.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Machine Translation: A View from the Lexicon", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. J. (1993). Machine Translation: A View from the Lexicon. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Large-Scale Dictionary Construction for Foreign Language Tutoring and Interlingual Machine Translation", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1997, "venue": "Machine Translation", "volume": "12", "issue": "", "pages": "271--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. J. (1997). Large-Scale Dictionary Construction for Foreign Language Tutoring and Interlingual Machine Translation. Machine Translation, 12:271-322.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Requesting in social interaction", "authors": [ { "first": "P", "middle": [], "last": "Drew", "suffix": "" }, { "first": "E", "middle": [], "last": "Couper-Kuhlen", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew, P. and Couper-Kuhlen, E. (2014). Requesting in so- cial interaction. John Benjamins Publishing Company.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Augmenting neural response generation with context-aware topical attention", "authors": [ { "first": "N", "middle": [], "last": "Dziri", "suffix": "" }, { "first": "E", "middle": [], "last": "Kamalloo", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Mathewson", "suffix": "" }, { "first": "O", "middle": [], "last": "Zaiane", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01063" ] }, "num": null, "urls": [], "raw_text": "Dziri, N., Kamalloo, E., Mathewson, K. W., and Za- iane, O. (2018). Augmenting neural response genera- tion with context-aware topical attention. 
arXiv preprint arXiv:1811.01063.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Neural approaches to conversational ai", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "L", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Foundations and Trends R in Information Retrieval", "volume": "13", "issue": "2-3", "pages": "127--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, J., Galley, M., Li, L., et al. (2019). Neural approaches to conversational ai. Foundations and Trends R in Infor- mation Retrieval, 13(2-3):127-298.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Handling Translation Divergences: Combining Statistical and Symbolic Techniques in Generation-Heavy Machine Translation", "authors": [ { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Fifth Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "84--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Habash, N. and Dorr, B. J. (2002). Handling Transla- tion Divergences: Combining Statistical and Symbolic Techniques in Generation-Heavy Machine Translation. 
In Proceedings of the Fifth Conference of the Association for Machine Translation in the Americas, pages 84-93, Tiburon, CA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Challenges in Building an Arabic GHMT system with SMT Components", "authors": [ { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "C", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "56--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Habash, N., Dorr, B. J., and Monz, C. (2006). Challenges in Building an Arabic GHMT system with SMT Compo- nents. In Proceedings of the 7th Conference of the Asso- ciation for Machine Translation in the Americas, pages 56-65, Boston, MA, August.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Persuasive Influence Detection: The Role of Argument Sequencing", "authors": [ { "first": "C", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5173--5180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hidey, C. and McKeown, K. (2018). Persuasive Influence Detection: The Role of Argument Sequencing. In Pro- ceedings of the Thirty-Second AAAI Conference on Arti- ficial Intelligence, pages 5173-5180, San Francisco, Cal- ifornia, USA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Semantics and Cognition", "authors": [ { "first": "R", "middle": [], "last": "Jackendoff", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jackendoff, R. (1983). Semantics and Cognition. 
MIT Press, Cambridge, MA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Semantic Structures", "authors": [ { "first": "R", "middle": [], "last": "Jackendoff", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jackendoff, R. (1990). Semantic Structures. MIT Press, Cambridge, MA.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The Proper Treatment of Measuring Out, Telicity, and Perhaps Even Quantification in English", "authors": [ { "first": "R", "middle": [], "last": "Jackendoff", "suffix": "" } ], "year": 1996, "venue": "Natural Language and Linguistic Theory", "volume": "14", "issue": "", "pages": "305--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jackendoff, R. (1996). The Proper Treatment of Measur- ing Out, Telicity, and Perhaps Even Quantification in En- glish. Natural Language and Linguistic Theory, 14:305- 354.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Catch me, yes we can!-pwning social engineers using natural language processing techniques in real-time", "authors": [ { "first": "M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "C", "middle": [], "last": "Song", "suffix": "" }, { "first": "H", "middle": [], "last": "Kim", "suffix": "" }, { "first": "D", "middle": [], "last": "Park", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "E", "middle": [], "last": "Namkung", "suffix": "" }, { "first": "I", "middle": [ "G" ], "last": "Harris", "suffix": "" }, { "first": "Carlsson", "middle": [], "last": "", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, M., Song, C., Kim, H., Park, D., Kwon, Y., Namkung, E., Harris, I. G., and Carlsson, M. (2018). 
Catch me, yes we can!-pwning social engineers using natural language processing techniques in real-time.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A Large-scale Classification of English Verbs", "authors": [ { "first": "K", "middle": [], "last": "Kipper", "suffix": "" }, { "first": "A", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "N", "middle": [], "last": "Ryant", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2007, "venue": "Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kipper, K., Korhonen, A., Ryant, N., and Palmer, M. (2007). A Large-scale Classification of English Verbs. In Language Resources and Evaluation.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Unaccusativity: At the Syntax-Lexical Semantics Interface, Linguistic Inquiry Monograph 26", "authors": [ { "first": "B", "middle": [], "last": "Levin", "suffix": "" }, { "first": "M", "middle": [], "last": "Rappaport Hovav", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levin, B. and Rappaport Hovav, M. (1995). Unaccusativ- ity: At the Syntax-Lexical Semantics Interface, Linguistic Inquiry Monograph 26. MIT Press, Cambridge, MA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "English Verb Classes and Alternations: A Preliminary Investigation", "authors": [ { "first": "B", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levin, B. (1993). English Verb Classes and Alternations: A Preliminary Investigation. 
The University of Chicago Press.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Construction of Chinese-English Semantic Hierarchy for Cross-language Retrieval", "authors": [ { "first": "G", "middle": [], "last": "Levow", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" }, { "first": "Lin", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levow, G., Dorr, B. J., and Lin, D. (2000). Construction of Chinese-English Semantic Hierarchy for Cross-language Retrieval.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Building Verb Meanings", "authors": [ { "first": "Rappaport", "middle": [], "last": "Hovav", "suffix": "" }, { "first": "M", "middle": [], "last": "Levin", "suffix": "" }, { "first": "B", "middle": [], "last": "", "suffix": "" } ], "year": 1998, "venue": "The Projection of Arguments: Lexical and Compositional Factors", "volume": "", "issue": "", "pages": "97--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rappaport Hovav, M. and Levin, B. (1998). Building Verb Meanings. In M. Butt et al., editors, The Projection of Arguments: Lexical and Compositional Factors, pages 97-134. CSLI Publications, Stanford, CA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Note on the sampling error of the difference between correlated proportions or percentages", "authors": [ { "first": "Q", "middle": [], "last": "Mcnemar", "suffix": "" } ], "year": 1947, "venue": "Psychometrika", "volume": "12", "issue": "2", "pages": "153--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNemar, Q. (1947). Note on the sampling error of the difference between correlated proportions or percent- ages. 
Psychometrika, 12(2):153-157, jun.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The Semantics and Pragmatics of Lexical and Grammatical Aspect", "authors": [ { "first": "M", "middle": [ "B" ], "last": "Olsen", "suffix": "" } ], "year": 1994, "venue": "Studies in the Linguistic Sciences", "volume": "24", "issue": "1-2", "pages": "361--375", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olsen, M. B. (1994). The Semantics and Pragmatics of Lexical and Grammatical Aspect. Studies in the Linguis- tic Sciences, 24(1-2):361-375.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "VerbNet: Capturing English Verb behavior, Meaning and Usage", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "C", "middle": [], "last": "Bonial", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Hwang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, M., Bonial, C., and Hwang, J. D. (2017). VerbNet: Capturing English Verb behavior, Meaning and Usage.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "An empirical study of self-disclosure in spoken dialogue systems", "authors": [ { "first": "A", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "A", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "253--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ravichander, A. and Black, A. W. (2018). An empirical study of self-disclosure in spoken dialogue systems. 
In Proceedings of the 19th Annual SIGdial Meeting on Dis- course and Dialogue, Melbourne, Australia, July 12-14, 2018, pages 253-263.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A survey of natural language generation techniques with a focus on dialogue systems-past, present and future directions", "authors": [ { "first": "S", "middle": [], "last": "Santhanam", "suffix": "" }, { "first": "S", "middle": [], "last": "Shaikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.00500" ] }, "num": null, "urls": [], "raw_text": "Santhanam, S. and Shaikh, S. (2019). A survey of natu- ral language generation techniques with a focus on dia- logue systems-past, present and future directions. arXiv preprint arXiv:1906.00500.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Detection of social engineering attacks through natural language processing of conversations", "authors": [ { "first": "Y", "middle": [], "last": "Sawa", "suffix": "" }, { "first": "R", "middle": [], "last": "Bhakta", "suffix": "" }, { "first": "I", "middle": [], "last": "Harris", "suffix": "" }, { "first": "C", "middle": [], "last": "Hadnagy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 IEEE Tenth International Conference on Semantic Computing (ICSC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sawa, Y., Bhakta, R., Harris, I., and Hadnagy, C. (2016). Detection of social engineering attacks through natural language processing of conversations. 
In Proceedings of the 2016 IEEE Tenth International Conference on Se- mantic Computing (ICSC), pages 262-265, 02.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Generation from Lexical Conceptual Structures", "authors": [ { "first": "D", "middle": [], "last": "Traum", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Workshop on Applied Interlinguas, North American Association for Computational Linguistics / Applied NLP Conference", "volume": "", "issue": "", "pages": "34--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Traum, D. and Habash, N. (2000). Generation from Lex- ical Conceptual Structures. In Proceedings of the Work- shop on Applied Interlinguas, North American Associa- tion for Computational Linguistics / Applied NLP Con- ference, pages 34-41.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Toward a Lexicalized Grammar for Interlinguas", "authors": [ { "first": "C", "middle": [ "R" ], "last": "Voss", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 1995, "venue": "J. of Machine Translation", "volume": "10", "issue": "", "pages": "143--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voss, C. R. and Dorr, B. J. (1995). Toward a Lexicalized Grammar for Interlinguas. J. of Machine Translation, 10:143-184.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Texts as actions: Requests in online chats between reference librarians and library patrons", "authors": [ { "first": "A", "middle": [], "last": "Zemel", "suffix": "" } ], "year": 2017, "venue": "Journal of the Association for Information Science and Technology", "volume": "67", "issue": "7", "pages": "1687--1697", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zemel, A. (2017). Texts as actions: Requests in online chats between reference librarians and library patrons. 
Journal of the Association for Information Science and Technology, 67(7):1687-1697.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "STYLUS: A Resource for Systematically Derived Language Usage. Association for Computational Linguistics", "authors": [ { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "Clare", "middle": [], "last": "Voss", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorr, B. and Voss, C. (2018). STYLUS: A Resource for Systematically Derived Language Usage. Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "A Categorial Variation Database for English", "authors": [ { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Bonnie", "middle": [ "J" ], "last": "Dorr", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Habash, N. and Dorr, B. J. (2003). A Categorial Variation Database for English.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": ": act, do, execute, perform \u2022 GIVE (55): commit, donate, grant, provide \u2022 LOSE (41): expend, forfeit, squander \u2022 GAIN (53): clean, get, obtain, profit, reap" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "(a) T: None, None S: None, GAIN/won(1.7Eu) (b) T: PERFORM/do(that), LOSE/lose(money) S: GAIN/won(money), GIVE/send(money) (c) T: None, GAIN/get(20%) S: PERFORM/sign(http:..), GAIN/get(20%)" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "(a) T: How are you? Thanks. S: ...too good to be true. What should I do? L+: I will contact asap. (b) T: Thanks for getting in touch, need more info. S: Nervous about this. Your name? L+: I would respond, 8 but I need more info. 
(c) T: What should I do now? S: Website doesn't open, is this the link? L+: Thanks, need more info before I paste link" }, "TABREF0": { "num": null, "content": "
(a) ... | Framing GAIN: won (1.7Eu)
(b) You won $1K. Did you send money? Do that by 9pm or lose money. Respond asap. | Ask GIVE: send (money) | Framing LOSE: lose (money)
(c) ... | Ask PERFORM: paste (http...) | Framing GAIN: get (20%)
", "text": "Get 20% discount. Check eligibility or paste this link: http.... Sign up for email alerts.", "type_str": "table", "html": null }, "TABREF1": { "num": null, "content": "", "text": "LCS+ Ask/Framing output for three SE emails", "type_str": "table", "html": null }, "TABREF2": { "num": null, "content": "
", "text": "shows the refined lexical organization for LCS+ with ask categories (PERFORM, GIVE) and framing categories (GAIN, LOSE). Boldfaced class numbers indicate the STYLUS classes that were modified. The resulting LCS+ resource drives our SE detection/response system. Each class includes italicized examples with boldfaced triggers. The table details changes to PERFORM and LOSE categories. For PERFORM, there are 6 deleted verbs across 10.2 (Banish Verbs) and 30.2 (Sight Verbs) and also 44 new verbs added to 30.2. For LOSE, 7 classes are associated with additions and/or deletions, as detailed in the table.", "type_str": "table", "html": null }, "TABREF4": { "num": null, "content": "
: Impact of lexical resources on ask/framing detection: Thesaurus, STYLUS, LCS+
whose corresponding (T)hesaurus and (S)TYLUS pairs are:
", "text": "", "type_str": "table", "html": null } } } }