{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:44:43.370907Z" }, "title": "Assessing Social License to Operate from the Public Discourse on Social Media", "authors": [ { "first": "Chang", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Paris", "suffix": "", "affiliation": { "laboratory": "", "institution": "CSIRO", "location": { "addrLine": "Data61" } }, "email": "cecile.paris@data61.csiro.au" }, { "first": "Ross", "middle": [], "last": "Sparks", "suffix": "", "affiliation": { "laboratory": "", "institution": "CSIRO", "location": { "addrLine": "Data61" } }, "email": "ross.sparks@data61.csiro.au" }, { "first": "Surya", "middle": [], "last": "Nepal", "suffix": "", "affiliation": { "laboratory": "", "institution": "CSIRO", "location": { "addrLine": "Data61" } }, "email": "surya.nepal@data61.csiro.au" }, { "first": "Keith", "middle": [], "last": "Vanderlinden", "suffix": "", "affiliation": { "laboratory": "", "institution": "Calvin University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Organisations are monitoring their Social License to Operate (SLO) with increasing regularity. SLO, the level of support organisations gain from the public, is typically assessed through surveys or focus groups, which require expensive manual efforts and yield quickly-outdated results. In this paper, we present SIRTA (Social Insight via Real-Time Text Analytics), a novel real-time text analytics system for assessing and monitoring organisations' SLO levels by analysing the public discourse from social posts. To assess SLO levels, our insight is to extract and transform peoples' stances towards an organisation into SLO levels. 
SIRTA achieves this by performing a chain of three text classification tasks, where it identifies task-relevant social posts, discovers key SLO risks discussed in the posts, and infers stances specific to the SLO risks. We leverage recent language understanding techniques (e.g., BERT) for building our classifiers. To monitor SLO levels over time, SIRTA employs quality control mechanisms to reliably identify SLO trends and variations of multiple organisations in a market. These are derived from the smoothed time series of their SLO levels based on exponentially-weighted moving average (EWMA) calculation. Our experimental results show that SIRTA is highly effective in distilling stances from social posts for SLO level assessment, and that the continuous monitoring of SLO levels afforded by SIRTA enables the early detection of critical SLO changes.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Organisations are monitoring their Social License to Operate (SLO) with increasing regularity. SLO, the level of support organisations gain from the public, is typically assessed through surveys or focus groups, which require expensive manual efforts and yield quickly-outdated results. In this paper, we present SIRTA (Social Insight via Real-Time Text Analytics), a novel real-time text analytics system for assessing and monitoring organisations' SLO levels by analysing the public discourse from social posts. To assess SLO levels, our insight is to extract and transform people's stances towards an organisation into SLO levels. SIRTA achieves this by performing a chain of three text classification tasks, where it identifies task-relevant social posts, discovers key SLO risks discussed in the posts, and infers stances specific to the SLO risks. We leverage recent language understanding techniques (e.g., BERT) for building our classifiers. 
To monitor SLO levels over time, SIRTA employs quality control mechanisms to reliably identify SLO trends and variations of multiple organisations in a market. These are derived from the smoothed time series of their SLO levels based on exponentially-weighted moving average (EWMA) calculation. Our experimental results show that SIRTA is highly effective in distilling stances from social posts for SLO level assessment, and that the continuous monitoring of SLO levels afforded by SIRTA enables the early detection of critical SLO changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Social License to Operate (SLO) represents the ongoing acceptance (or lack thereof) of an organisation's standard business practices or operating procedures by the general public (or the society at large) (Moffat and Zhang, 2014; Gunningham et al., 2004; Moffat et al., 2016) . It captures the opinion of the public towards a business. Low SLO levels can increase business risks significantly, and, in the worst-case scenarios, prevent the operation of an organisation. To obtain a high SLO level, organisations typically need to build trust with the community and then work to maintain that trust. Traditionally, the SLO of an organisation is evaluated using surveys and focus groups (Moffat and Zhang, 2014) , during which a diversity of opinions is collected and the results then quantified. These effective techniques provide in-depth analysis. They are, however, manual practices and thus expensive to do on a frequent basis (Moffat and Zhang, 2014) . In addition, the samples of a survey are often limited, and, as the time intervals between consecutive surveys are usually long, an organisation might not detect critical changes in its SLO levels in a timely fashion, leading to exposure to potential risks. 
The public discussions continuously taking place on social media, where people are not shy about expressing their opinions about a number of topics, including companies and specific projects, provide an opportunity to monitor SLO in real-time, on a continuous basis and at scale. This is what we aim to do in this work.", "cite_spans": [ { "start": 205, "end": 229, "text": "(Moffat and Zhang, 2014;", "ref_id": "BIBREF9" }, { "start": 230, "end": 254, "text": "Gunningham et al., 2004;", "ref_id": "BIBREF2" }, { "start": 255, "end": 275, "text": "Moffat et al., 2016)", "ref_id": "BIBREF10" }, { "start": 685, "end": 709, "text": "(Moffat and Zhang, 2014)", "ref_id": "BIBREF9" }, { "start": 929, "end": 953, "text": "(Moffat and Zhang, 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first determined the possible facets of SLO for our domain, specifically economic (e.g., the public is in favour of a project because it will create jobs), environmental (e.g., the public believes the company has a good/bad environmental record) and social (e.g., the public believes the company addresses (or not) its social responsibilities). We then built SIRTA (Social Insight via Real-Time Text Analytics), a novel automated system that combines advanced text analytics with real-time monitoring techniques to assess and monitor the SLO levels of a collection of organisations (in the same industry) over time. By taking the \"pulse\" of the public towards an organisation in real time, through the lens of social media, this tool complements the in-depth analysis done through surveys and focus groups, providing an early indication of trends, and potentially informing the design of in-depth surveys. Figure 1 : The dashboard of SIRTA for SLO assessment and monitoring plus the SIRTA architecture. Figure 1 shows the dashboard of SIRTA for monitoring several major mining companies in the country. 
Its main functionality is presented in three panels: 1) the SLO Weekly Overview, a list of real-time (weekly) numerical scores representing the SLO levels for the organisations under consideration, 2) the SLO Trend, which plots the long-term trend of the SLO level of a selected organisation (here, Rio Tinto), compared with the general trend of the market (all the organisations together), and 3) the Social Feed, which lists the most recent social media posts (e.g., tweets) about the selected organisation, with both the stances and SLO risk categories (i.e., environmental, social, and economic) identified.", "cite_spans": [], "ref_spans": [ { "start": 908, "end": 916, "text": "Figure 1", "ref_id": null }, { "start": 1005, "end": 1013, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To carry out the SLO assessment, SIRTA extracts opinion information from posts published on social media (Twitter), along the different SLO facets, and then transforms that information into SLO scores. In contrast to many opinion mining systems that rely primarily on sentiment analysis, e.g., (Pang et al., 2008) , we focus on stance detection (Mohammad et al., 2016a) , which is more suitable for our task because it indicates whether someone is for, neutral or against a specific company, not just whether the surface sentiment of their posts is positive or negative. The novel aspect of SIRTA's SLO assessment engine is a specialised text classification pipeline (see the Text Analytic Pipeline on the bottom of Figure 1 ), where three chained text classification tasks are performed for opinion extraction: 1) relevance classification, for finding posts contributing to the SLO assessment, 2) risk classification, for identifying the different facet(s), or SLO risk(s), being discussed in the posts, and 3) risk-aware stance classification, for detecting stances in the posts that are specific to each SLO risk. 
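The chained classification and score computation just described can be sketched as follows. This is an illustrative sketch only: the keyword-based stubs stand in for the trained fastText/BERT classifiers, and all function names and keyword rules are our own, not SIRTA's.

```python
# Sketch of the three chained tasks (relevance -> risk -> risk-aware stance)
# and the score aggregation. Stub classifiers are hypothetical stand-ins.

RISKS = ("economic", "environmental", "social")
STANCE_SCORE = {"for": 1.0, "neutral": 0.0, "against": -1.0}

def is_relevant(post: str) -> bool:
    """Task 1 (relevance): keep only posts that can contribute to SLO assessment."""
    return "mine" in post.lower() or "mining" in post.lower()

def classify_risks(post: str) -> list:
    """Task 2 (risk, multi-label): which SLO risk factors does the post discuss?"""
    keywords = {"economic": "jobs", "environmental": "reef", "social": "community"}
    return [risk for risk, kw in keywords.items() if kw in post.lower()]

def classify_stance(post: str, risk: str) -> str:
    """Task 3 (risk-aware stance): for / neutral / against, per risk factor."""
    return "against" if "stop" in post.lower() else "for"

def post_slo_score(post: str) -> float:
    """Average the numeric stance scores over the risks detected in the post."""
    risks = classify_risks(post) or list(RISKS)  # fall back to all risks
    return sum(STANCE_SCORE[classify_stance(post, r)] for r in risks) / len(risks)

def organisation_slo_score(posts) -> float:
    """Mean post-level score over the relevant posts for one organisation."""
    relevant = [p for p in posts if is_relevant(p)]
    if not relevant:
        return 0.0
    return sum(post_slo_score(p) for p in relevant) / len(relevant)
```

For example, `organisation_slo_score(["Stop the mine, save our reef", "The mine will create jobs"])` averages one negative and one positive stance to 0.0.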
The outcome of the text analytic pipeline is fed into the SLO score computation component, which converts the stances into numerical SLO scores. To train and evaluate the classifiers for each task above, we created both a silver standard and a gold standard dataset and employed state-of-the-art language understanding models such as BERT (Devlin et al., 2019) . To monitor the derived SLO scores, a monitoring engine keeps track of the time series of the SLO scores of multiple organisations operating in a market. Specifically, it leverages Control Charts (Kan, 2002) , a powerful tool for statistical process control. The monitoring engine discovers if an organisation is experiencing significant changes in its SLO score by contrasting its time series with a benchmark of the market.", "cite_spans": [ { "start": 294, "end": 313, "text": "(Pang et al., 2008)", "ref_id": "BIBREF15" }, { "start": 345, "end": 369, "text": "(Mohammad et al., 2016a)", "ref_id": "BIBREF11" }, { "start": 1459, "end": 1480, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF1" }, { "start": 1678, "end": 1689, "text": "(Kan, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 722, "end": 730, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct several quantitative experiments to evaluate the performance of our classifiers, and thus the effectiveness of our text analysis pipeline. We then present a case study, which suggests that SIRTA can identify periods of unusual change early. 
This confirms our original hypothesis that we could harness social media to monitor SLO in real-time and at scale, in a relatively inexpensive manner, reserving the more expensive, traditional methods for circumstances where a more detailed assessment is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 SIRTA: Real-Time Text Analytics for SLO Assessment and Monitoring", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The bottom part of Figure 1 illustrates the architecture of SIRTA, which consists of two processing modules: the SLO assessment engine and the SLO monitoring engine. The assessment engine takes social feeds as input and generates SLO scores by first extracting opinions from them via text analytics. Specifically, it performs the three text classification tasks mentioned earlier to extract, in real-time, opinions from a stream of posts, and then calculates the SLO scores.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 27, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Overview", "sec_num": "2.1" }, { "text": "To enable appropriate monitoring (the detection of a significant change), the opinions are aggregated regularly in different time frames (e.g., weekly). These are computed and stored in a dedicated database.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2.1" }, { "text": "The monitoring engine keeps track of the time series of different organisations' scores as well as that of the overall market over time. It first computes a benchmark SLO time series representing the context (market) where those organisations operate. This allows one to see when an organisation's score departs significantly from the benchmark. 
To identify such a departure, the monitoring engine applies quality control techniques (Kan, 2002) to compute control limits that bound an organisation's time series; a departure occurs when the benchmark falls outside these limits.", "cite_spans": [ { "start": 433, "end": 444, "text": "(Kan, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "System Overview", "sec_num": "2.1" }, { "text": "The assessment engine transforms social posts into an organisation's SLO score with two modules: a text analytic pipeline and the SLO score computation. In the text analytic pipeline, three sequential tasks are performed: relevance classification, risk classification, and risk-aware stance classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Assessment Engine", "sec_num": "2.2" }, { "text": "SIRTA uses the Twitter API to collect all tweets containing the names of the organisations under consideration. It is, of course, inevitable that posts irrelevant to our task are also collected by our system. We thus need to discard these irrelevant posts and keep only the posts that can contribute to the SLO assessment. This is done through the relevance classification task, and a binary relevance classifier C_r was trained for this task. The classifier C_r reads a post x_i and assigns it a relevance label ŷ_r. To train C_r, we minimised the negative log-likelihood of the ground-truth label: L_r = −Σ_{i=1}^{N_r} y_r log ŷ_r, where y_r is x_i's true relevance label, and N_r the training data size. To facilitate discussion, we use R_o to denote the set of all relevant posts discussing organisation o detected by C_r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevance Classification: Finding Task-Relevant Posts", "sec_num": "2.2.1" }, { "text": "As mentioned earlier, SLO can have many facets, or, put differently, SLO poses risks along various dimensions. 
Accordingly, an individual post can relate to one or more SLO risks. Consider, for example, the following two posts about mining companies. The first solely expresses an opinion about the company's handling of environmental concerns, thus contributing (negatively) to the SLO risk of environment for this company. In contrast, the second negatively mentions a company's actions with respect to both environmental and social concerns, thus contributing (again negatively) to the SLO risks of both environment and social for this company.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "\u2022 They don't even know which aquifer is the source of the Doongmabulla Springs, but Adani [the company name] is belittling crucial environmental studies as \"paperwork\". (SLO risk factor: environmental)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "\u2022 We are at BHP [the company name] HQ protesting against their toxic Olympic Dam uranium mine that fuels war and breaches land rights #uprootthesystem #nonukes #KeepItInTheGround (SLO risk factors: environmental and social)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "Being able to identify the specific SLO risks discussed in the public discourse allows an organisation to better understand and manage them. We took a two-step approach to detecting the risk factors mentioned in a post. First, we identified the SLO risks for our domain (mining), based on knowledge from domain experts and a literature survey. They are economic, social and environmental. We note that, while these risks are fairly general, SLO risks might be different in different domains. 
Let K be the set of these risks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "K = {economic, environmental, social}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "We then trained a multi-label risk classifier C_k to find all potential risk factors mentioned in a post x_i ∈ R_o. The training involved minimising the one-vs-all loss, a commonly-used objective for multi-label classification (Tsoumakas and Katakis, 2007) . Formally, we calculated the binary cross-entropy between the predictions and ground-truth labels of the same training example,", "cite_spans": [ { "start": 227, "end": 256, "text": "(Tsoumakas and Katakis, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [ ], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_k = \u2212(y_i^k \u2022 log C_k(x_i) + (1 \u2212 y_i^k) \u2022 log(1 \u2212 C_k(x_i)))", "eq_num": "(1)" } ], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "where C_k(x_i) produces the classifier outputs. y_i^k is the corresponding risk label of x_i, which is multi-hot encoded; y_{ij}^k = 1 if x_i belongs to the jth risk, otherwise y_{ij}^k = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Risk Classification: Discovering Key SLO Risks Mentioned in a Post", "sec_num": "2.2.2" }, { "text": "The first two classifiers identified the overall relevance of a post to our task, and the specific SLO risk(s) to which it is relevant. In the final text analytics task, we extract the opinion of a post. 
Sentiment analysis (Liu, 2012 ) is a common opinion mining technique, but recent studies have shown that one's sentiment may not always reflect one's attitude towards a target (Sobhani et al., 2016; Mohammad et al., 2017) . For example, consider the following post about a mining company, \"This is huge! Our momentum is unstoppable. Every day we're closer to stopping Adani and saving our Reef\". Although the sentiment here is positive (\"This is huge!\"), the author's attitude towards the mining company (Adani) is negative. Therefore, instead of extracting the sentiment of the post, we propose to extract the stance of the author implied in their posts (Mohammad et al., 2016a; Augenstein et al., 2016; Sun et al., 2018) , which could be for, against, or neutral towards a target (an organisation). The stance could be extracted in two ways. We could employ a general stance classifier (to obtain the stance of any relevant post) or a risk-aware stance classifier, that is, a classifier specifically designed to detect the stance in posts discussing a specific risk factor. We posit that the language used to express stances varies with different risk factors (e.g., \"create jobs\" for economic vs. \"destroy the reef\" for environmental), and thus a risk-aware stance classifier would be more effective. 
Our experiments show that training a stance classifier for each risk factor indeed allows us to capture the stance more accurately (\u223c4% boost) than training a generic stance classifier to work across the classes (see Section 3.3 below).", "cite_spans": [ { "start": 222, "end": 232, "text": "(Liu, 2012", "ref_id": "BIBREF8" }, { "start": 379, "end": 401, "text": "(Sobhani et al., 2016;", "ref_id": "BIBREF17" }, { "start": 402, "end": 424, "text": "Mohammad et al., 2017)", "ref_id": "BIBREF13" }, { "start": 858, "end": 882, "text": "(Mohammad et al., 2016a;", "ref_id": "BIBREF11" }, { "start": 883, "end": 907, "text": "Augenstein et al., 2016;", "ref_id": "BIBREF0" }, { "start": 908, "end": 925, "text": "Sun et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Stance Classification: Revealing Risk-Specific Stances", "sec_num": "2.2.3" }, { "text": "To train these classifiers, we minimised the negative log-likelihood of the ground truth label:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Classification: Revealing Risk-Specific Stances", "sec_num": "2.2.3" }, { "text": "L_s^j = −Σ_{i=1}^{N} y_s^j log C_s^j(x_i),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Classification: Revealing Risk-Specific Stances", "sec_num": "2.2.3" }, { "text": "where y_s^j is the true stance label of the post x_i for the jth risk factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Classification: Revealing Risk-Specific Stances", "sec_num": "2.2.3" }, { "text": "The final step in the assessment engine is to quantify the stances derived from the text analytic pipeline to obtain an SLO score, based on the degree of the opinion expressed in each post for an organisation o using the set of relevant posts R_o. 
With all the stance classifiers {C_s^j}_{j=1}^{|K|} developed (one for each risk factor), given a post x, its overall SLO score is derived by averaging over the stances across all risk factors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Score Computation: Transforming Stances into SLO Scores", "sec_num": "2.2.4" }, { "text": "s = (1/|K|) Σ_{j=1}^{|K|} C_s^j(x) 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Score Computation: Transforming Stances into SLO Scores", "sec_num": "2.2.4" }, { "text": "To produce the final SLO score for an organisation o, we aggregate the SLO scores of all relevant posts R_o via averaging:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Score Computation: Transforming Stances into SLO Scores", "sec_num": "2.2.4" }, { "text": "s_o = (1/|R_o|) Σ_{x_i ∈ R_o} s_i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Score Computation: Transforming Stances into SLO Scores", "sec_num": "2.2.4" }, { "text": "The changing nature of an organisation's operational context can impact its SLO score. Changes could be due, for example, to a change of a company's CEO, changes in the general trend of the overall market, or a major event. Assessing such changes thus requires that we keep track of not only the time series of a company's SLO scores but also the time series of scores of other companies operating in that market sector. This allows us to see when a company's score departs significantly from the average score across similar organisations, which can be seen as a benchmark of the context/market. Such information can drive strategic action at critical points in time. SIRTA's SLO monitoring engine is designed to track a comparable set of organisations across time. To achieve this, it first obtains the market benchmark by averaging over all organisations' SLO time series. 
This ensures that the larger or more topical organisations (i.e., the ones that are discussed more often) do not dominate in the comparison. All organisations are thus comparable as they have faced the same market conditions over the same period. Then, the engine seeks to monitor the departure of each organisation's time series from the benchmark over a period of time (e.g., one week), which is computed as follows. Let s_{o,i}^t be the ith SLO score of organisation o in period t, and n_o^t the number of its SLO scores in t; the average SLO score of o in t is then given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "s̄_o^t = Σ_{i=1}^{n_o^t} s_{o,i}^t / n_o^t, and the standard deviation by σ_o^t = √( Σ_{i=1}^{n_o^t} (s_{o,i}^t − s̄_o^t)² / n_o^t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "For organisations with sufficient observations in t, the Shewhart chart (Kan, 2002) with upper control limit (UCL) and lower control limit (LCL) is given by", "cite_spans": [ { "start": 72, "end": 83, "text": "(Kan, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "UCL_o = s̄_o^t + 3σ_o^t/√(n_o^t) and LCL_o = s̄_o^t \u2212 3σ_o^t/√(n_o^t)", "eq_num": "(2)" } ], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "Then a departure of the organisation o from the benchmark in t occurs if the benchmark is below LCL_o or above UCL_o. For organisations without sufficient observations in t, we use the exponentially-weighted moving average (EWMA) for the monitoring. Specifically, in period t, we compute the moving average as a_o^t = 0.05 s̄_o^t + 0.95 a_o^{t−1} if s̄_o^t exists, otherwise a_o^t = a_o^{t−1}. 
Similarly, the moving standard deviation is defined as ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "v_o^t = 0.05 σ_o^t + 0.95 v_o^{t−1} if σ_o^t exists, otherwise v_o^t = v_o^{t−1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SLO Monitoring Engine", "sec_num": "2.3" }, { "text": "We now present how we developed, trained and evaluated the classifiers for the SLO assessment engine. We first created training and test data sets for each task in the text classification pipeline. We used these data sets to train a number of modules, experimenting with several state-of-the-art techniques. Finally, we evaluated the classifiers, in order to choose the best ones to incorporate into SIRTA. Data sets for stance classification. We collected tweets about different mining organisations posted in the country from 1 January 2016 to 23 October 2019. We obtained silver standard labels for these tweets with rules that automatically determine the stance labels based on specific meta signals such as hashtags and Twitter account names. While the full set of rules is presented in the Appendix, some examples are: 1) favour: a tweet by a mining company-owned account, e.g., adaniaustralia; 2) against: a tweet containing disapproving hashtags, e.g., #stopadani; and 3) neutral: a tweet from known mining-related news sources, e.g., MiningNewsNet. We do not rely on the content of a tweet to determine its label. To test the accuracy of the auto-coding, we randomly sampled 24 tweets from the resulting training set and asked three coders to manually code them as stance for, against or neutral. The coding had a Fleiss Kappa score of 0.71, in the \"substantial agreement\" range, and the majority code from this manual coding matched the auto-coding in all cases. We took this as evidence that the auto-coder based on the simple rules listed above provided largely accurate codings. 
We note that such a silver training set would inevitably contain noise (e.g., a news source account may occasionally post a positive news report about a mining company), but we suspect that this would not harm the performance much (as shown later in our experiments) due to the large scale of the training set. To prepare the test set, we manually created a gold standard dataset by asking three human coders to annotate 274 tweets 2 using specific annotation guidelines (see the Appendix). The statistics summary of the training/test sets for this task is shown in Table 1 . Data sets for risk classification. The training set for this task shares the same tweets as in the above task, except that each tweet is now associated with one or more SLO risk labels. As already mentioned, our risk labels were: social, economic, and environmental. We again obtained silver risk labels by using rules matching the tweet contents with specific keywords (e.g., \"community\" for social, \"environment\" and \"greatbarrierreef\" for environment, and \"jobs\" for economic). For the test set, we asked three human coders to annotate 300 tweets. Table 2 shows the statistics summary 3 of the training/test sets. Three-Label Total Train 12,960 13,540 7,405 35,878 56,605 5,654 624 62,883 Test 140 72 61 101 207 84 9 300 Table 2 : Statistics summary of the datasets for SLO category classification. Data set for relevance classification. Finally, we built the training/test sets for the relevance classification task. For the training set, we considered all the tweets used in the stance classification task as relevant, as they were collected with rules ensuring they were mining-related and informative to stance determination. Then, to get the irrelevant tweets and a balanced data set, we randomly sampled the Twitter stream 4 to obtain the same number of tweets (62,883). The resulting training set contains 125,764 tweets in total (50% relevant and 50% irrelevant). 
For the test set, as we lacked a gold standard set, 5-fold cross-validation was used instead.", "cite_spans": [], "ref_spans": [ { "start": 2152, "end": 2159, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 2716, "end": 2723, "text": "Table 2", "ref_id": null }, { "start": 2782, "end": 2916, "text": "Three-Label Total Train 12,960 13,540 7,405 35,878 56,605 5,654 624 62,883 Test 140 72 61 101 207 84 9 300 Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Training and Evaluating the SLO Assessment Engine", "sec_num": "3" }, { "text": "We pre-processed the data in the above data sets as follows. For each tweet, tokenisation was done via the CMU Tweet Tagger (Owoputi et al., 2013) , and character elongations were shrunk (e.g., \"yeees\" \u2192 \"yes\"). We removed all hashtags and mentions 5 . We also replaced all URLs and all year, time, and cash expressions with placeholders (e.g., \"slo url\"). All text was down-cased. Stop words were retained, because of the stance-indicative information they can contain (e.g., \"not\").", "cite_spans": [ { "start": 124, "end": 146, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Classifiers and Training Details", "sec_num": "3.2" }, { "text": "Following best practice for training text classification models, we implemented four classifiers based on state-of-the-art neural network models as baselines: 1) fastText (Joulin et al., 2017) : an efficient classification model trained on word vectors created with subword information; 2) BiLSTM (Augenstein et al., 2016): a bidirectional LSTM trained on word vectors pretrained with GloVe word embeddings (Pennington et al., 2014) (glove.twitter.27B, 200d) ; 3) CNN (Kim, 2014): a convolutional neural network for sentence classification; 4) BERT (Devlin et al., 2019): a general-purpose pre-trained contextual model for sentence encoding and classification.", "cite_spans": [ { "start": 176, "end": 197, "text": "(Joulin et al., 
2017)", "ref_id": "BIBREF4" }, { "start": 412, "end": 463, "text": "(Pennington et al., 2014) (glove.twitter.27B, 200d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classifiers and Training Details", "sec_num": "3.2" }, { "text": "The following configurations were used for training the classifiers: 1) fastText: learning rate of 0.1 was used, and the training did not stop until 10 epochs had passed; 2) BiLSTM: the hidden sizes of both LSTM and the followed dense layer were set to 256. A step learning rate scheduler was used, where the learning rate was set to 0.5 initially and then decayed by 10% after each epoch. A dropout layer was placed after the dense layer with a dropout rate of 0.3; 3) CNN: four 1D convolutional layers of 256 filters were chained as the sentence encoder, with the sequential filter sizes as 2, 3, 4, and 5. The same learning rate scheduler and dropout layer as those in BiLSTM were used; 4) BERT: the BERT BASE (uncased) was used. The learning rate was set to 10 \u22125 . The maximum number of wordpieces was set to 128. The batch size for each training step was 16 for BERT (due to GPU memory limits) and 128 for others. Early stopping was applied with a patience of 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers and Training Details", "sec_num": "3.2" }, { "text": "We now report the results of all the text classifiers in SIRTA's SLO assessment engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "Relevance Classification All the classifiers achieved reasonable results on this task, as shown in Table 3, suggesting the distributions of the relevant and irrelevant tweets are easily separable. Among the classifiers, fastText and BiLSTM obtained the highest scores in different training set settings, while BERT, as a cutting-edge modelling tool for text, surprisingly failed to show its potential on this task. 
This could be caused by the small batch size (16) used for BERT training to avoid running out of GPU memory; a small batch size usually makes SGD updates less effective on each batch. Another possible reason is that the BERT model is overly complex for such an easy task (the classes are easily separable), potentially leading to overfitting. Risk Classification Table 4 shows the classification results. This task is more challenging than the previous one, as evidenced by the generally lower accuracy attained by all classifiers. BERT performed the best across all risk factors, exhibiting its superiority on this more demanding modelling task. However, considering its complexity, the improvements gained by BERT were not proportionally outstanding (2%\u223c2.9%). fastText also performed well, better than both BiLSTM and CNN, demonstrating that it is also a cost-effective choice for this task. The performance on the Environmental risk factor was better than on all other factors, which may suggest that it is easier to recognise a post discussing environmental issues than one discussing social or economic issues. The performance on the Social risk was the worst. We found that Social samples dominate the multi-label samples in the test set (88.2%); such multi-label posts are harder to classify. As a result, the classifiers make more mistakes on the Social samples. Stance Classification To validate the hypothesis that a risk-aware classifier is more accurate than a generic stance classifier, we compared two experiments: 1) we trained four individual risk-specific stance classifiers using data from the corresponding risk (R), and 2) we trained a single generic stance classifier using data on all risks (not differentiating the risk labels). For both experiments, we split the test set into subsets by risk. Each risk-specific classifier was tested on its respective risk subset. The generic stance classifier was tested on all risk subsets. 
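As an illustration of the two training regimes, the following sketch shows how the data is partitioned (the function names and the majority-vote "model" are hypothetical stand-ins for the actual fastText/BERT fitting):

```python
# Illustrative sketch only: the majority-vote 'model' stands in for
# the real fastText/BERT stance-classifier training.
from collections import defaultdict

def train_stance_classifier(examples):
    # Stand-in for model fitting: memorise the majority stance label.
    counts = defaultdict(int)
    for text, risk, stance in examples:
        counts[stance] += 1
    return max(counts, key=counts.get)

def train_risk_specific(examples):
    # Regime 1: one specialised stance classifier per SLO risk factor.
    by_risk = defaultdict(list)
    for example in examples:
        by_risk[example[1]].append(example)
    return {risk: train_stance_classifier(subset) for risk, subset in by_risk.items()}

def train_generic(examples):
    # Regime 2: a single classifier over all risks, ignoring risk labels.
    return train_stance_classifier(examples)
```

At test time, each risk-specific model is applied only to the test subset of its own risk, while the generic model is applied to every subset.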
The results show that the risk-specific classifiers outperformed the generic classifier in most combinations, although the gains are not necessarily statistically significant in all cases. This validates our hypothesis that the language used to express stances generally varies when people discuss different SLO risk factors, and that training specialised stance classifiers for different risks could better capture the underlying risk-specific language variations.", "cite_spans": [], "ref_spans": [ { "start": 776, "end": 783, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "3.3" }, { "text": "SIRTA was built using Apache Kafka and the ELK stack (Elasticsearch, Logstash, and Kibana) to construct the real-time text classification pipeline in its SLO assessment engine. It continuously obtains streaming tweets, which are then fed into the analytics pipeline. For the classifier configuration, based on our evaluation in the previous subsection (\u00a73.3), we deployed fastText for the relevance classification task and BERT for both the risk and stance classification tasks. The monitoring dashboard was implemented with NodeJS and D3. SIRTA was deployed on a web server in dockerised form. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Deployment of SIRTA", "sec_num": "3.4" }, { "text": "We have been using SIRTA to monitor a number of major mining companies in the country. Figure 2a shows the trends and variations of their (EWMA 6 ) SLO scores over a four-year span, from 2016 to 2020, on a weekly basis (averaging over a one-week window). Among these companies, Adani has consistently had the lowest score over time. This aligns with our observations of the numerous Twitter campaigns against the company. 7 SIRTA captured these negative opinions and their trend over time. 
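The EWMA smoothing behind these weekly trend lines can be sketched as follows (a minimal illustration; the smoothing factor alpha is hypothetical, not necessarily the value used in SIRTA):

```python
# Exponentially-weighted moving average over weekly SLO scores.
# alpha (smoothing factor) is illustrative, not SIRTA's actual setting.
def ewma(scores, alpha=0.2):
    smoothed = []
    for score in scores:
        previous = smoothed[-1] if smoothed else score
        smoothed.append(alpha * score + (1 - alpha) * previous)
    return smoothed
```

A larger alpha tracks recent weeks more closely; a smaller alpha gives a smoother series, which also helps in weeks with few matching posts.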
Rio Tinto, at the other extreme, had maintained a generally higher SLO profile until very recently, when it destroyed the Juukan Gorge, an ancient Aboriginal sacred site in Australia. 8 BHP also started with a high SLO, on par with Rio Tinto, but we then see a big departure from Rio Tinto in early 2016. This is likely due to the Mariana dam disaster 9 in South America in late 2015. 10 Overall, the mean time series of SLO scores (black) was essentially steady until 2018 (although there is a decrease in early 2016, also probably due to the dam disaster), and then decreased significantly, indicating that the general public in the country has become more negative about the mining sector overall. This is likely due to increased concerns about the environment and public awareness of a major project by Adani, with the 'stop adani' movement becoming very active in early 2018.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 96, "text": "Figure 2a", "ref_id": null } ], "eq_spans": [], "section": "Monitoring in Practice", "sec_num": "3.5" }, { "text": "An advantage of SIRTA is its ability to detect SLO changes, thus allowing prompt mitigating actions to be taken. To demonstrate this, we look at two companies, BHP and Rio Tinto. Both experienced significant changes in SLO scores during the monitoring period (see Figure 2b ). We again observe the drop in the sector's SLO in 2018. With respect to BHP, there is a sharp departure from Rio Tinto in early 2016, most likely from the Mariana dam disaster, as mentioned above. Another point of particular interest is the sharp decline in Rio Tinto's SLO score in 2020, probably due to the destruction of the sacred site. After that, Rio Tinto and BHP appear to be converging in their scores, whereas Rio Tinto had previously scored much higher. We notice a recent rise in BHP's SLO, potentially because of its announcement that it would postpone the destruction of other ancient caves until it had a chance to consult with the community. 
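As a rough illustration of how such change points can be flagged automatically, a week-to-week movement in the smoothed series can be compared against the variation seen so far (the k-sigma threshold rule below is hypothetical; SIRTA's quality-control mechanisms are more sophisticated):

```python
# Illustrative change-point flagging over the smoothed SLO series;
# the k-sigma rule is a hypothetical stand-in for SIRTA's monitoring.
import statistics

def flag_changes(smoothed, k=3.0):
    flags = []
    diffs = []
    for previous, current in zip(smoothed, smoothed[1:]):
        diff = current - previous
        sigma = statistics.stdev(diffs) if len(diffs) >= 2 else None
        if sigma and abs(diff) > k * sigma:
            flags.append(len(diffs) + 1)  # index of the flagged week
        diffs.append(diff)
    return flags
```

A flagged week marks a jump much larger than the historical week-to-week variation, such as the early-2016 and 2020 drops discussed above.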
We note that BHP, Rio Tinto, and Fortescue are essentially iron ore companies and have moved out of fossil fuels, while the other companies are related to fossil fuels. The figure shows that the iron ore companies generally do better than the fossil-fuel-related companies (when no major event like the destruction of ancient sites or a dam disaster occurs), reinforcing the hypothesis that the downward trend in the mean SLO score is due to climate-change concerns about using fossil fuels. To verify our hypothesis that the departure from Rio Tinto in early 2016 was due to the dam collapse, we performed a further text analysis on all the tweets about BHP in our database between January and March 2016. The results are shown in Figure 3 . We found that most of those tweets were discussing environmental issues (51%, as opposed to about 25% each for economic and social issues in Figure 3a ) and holding an against stance towards the company (46% in Figure 3b ). We also drew a word cloud of the contents of those tweets (Figure 3c) , which shows the frequent use of words such as \"deadly\", \"dam\", and \"collapse\". All these findings suggest that the negative change in BHP's SLO scores during that period was related, at least in part, to the dam collapse.", "cite_spans": [], "ref_spans": [ { "start": 268, "end": 277, "text": "Figure 2b", "ref_id": null }, { "start": 1623, "end": 1631, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1775, "end": 1784, "text": "Figure 3a", "ref_id": "FIGREF1" }, { "start": 1845, "end": 1854, "text": "Figure 3b", "ref_id": "FIGREF1" }, { "start": 1916, "end": 1927, "text": "(Figure 3c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Monitoring in Practice", "sec_num": "3.5" }, { "text": "We contacted Voconiq 11 , a company focused on measuring social license to operate with mining companies in Australia and overseas. 
Voconiq has pioneered the development of social science tools to provide its clients with insights into their social license to operate. Voconiq employs survey data, unstructured qualitative data, and workshops and interviews with community members and company employees living in mining communities. Voconiq collects data about a similar set of companies to SIRTA. We shared our results with the CEO and Co-Founder, Dr Moffat. He told us he believed our work \"added an important piece of research and technical development to the field of SLO research and practice\". Dr Moffat also made some observations about the insights gained from SIRTA, compared to the patterns Voconiq observed. The first observation concerned the response to the Juukan Gorge incident, evident in the Twitter data in Quarter 2, 2020 (Figure 2a ). The company-specific patterns of community response observed through SIRTA were similar to those observed in his own work utilising data collected from monthly community surveys. Publicly available data regarding community sentiment toward Rio Tinto in Pilbara communities showed a drop in community sentiment corresponding with a similar drop in the SLO scores in SIRTA. Dr Moffat's second observation was that SIRTA and Voconiq \"listen to different voices\", reinforcing our hypothesis that looking at SLO from a social media perspective can provide complementary information to other methods. While large events like the Samarco dam collapse or the Juukan Gorge incident are of a magnitude that affects community sentiment within local mining communities and at a larger societal scale in similar ways, the sentiment of community members at these two scales is typically different. Local communities are typically more supportive of mining companies (often because of the jobs they provide) and have a more realistic understanding of both the benefits and impacts of mining operations. 
In contrast, data collected at a societal level (e.g., from social media) often reflects a different set of issues and agendas. These differences are evident in the divergences Dr Moffat observed when looking at the insights from the Twitter data through SIRTA. He emphasised, however, that this was not a problem but rather a strength of our work. Data from social media provides a unique, and often leading, indicator of community sentiment, allowing companies and other stakeholders to combine these perspectives with those of local community residents for a more three-dimensional (and accurate) understanding of SLO at multiple scales.", "cite_spans": [], "ref_spans": [ { "start": 941, "end": 951, "text": "(Figure 2a", "ref_id": null } ], "eq_spans": [], "section": "Cross-Validation with Survey-based Approaches", "sec_num": "3.6" }, { "text": "Traditionally, organisations determine SLO using surveys and focus groups (Moffat and Zhang, 2014) , which are effective but also expensive to run. Our work seeks to complement these in-depth qualitative methods, harnessing social media to provide a real-time view of social license for organisations and/or specific projects, with early detection of changes, especially downward changes, which might need to be addressed through immediate action. It can also inform the design of the focus groups and surveys, by providing information about the current concerns of the public.", "cite_spans": [ { "start": 74, "end": 98, "text": "(Moffat and Zhang, 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Social media monitoring systems have been built to cover social phenomena (Wan and Paris, 2015; Larsen et al., 2015; Joshi et al., 2019) , but none has focused on social license. 
Yet, this is potentially a very important application of social media analytics, as, increasingly, companies need to ensure they have such a license, either as an organisation as a whole or for specific projects they intend to carry out. In addition, an important aspect of our work is to couple the text analytics with a statistical monitoring engine to ensure insights from the text are appropriately placed in their overall historical and sector context.", "cite_spans": [ { "start": 74, "end": 95, "text": "(Wan and Paris, 2015;", "ref_id": "BIBREF20" }, { "start": 96, "end": 116, "text": "Larsen et al., 2015;", "ref_id": "BIBREF7" }, { "start": 117, "end": 136, "text": "Joshi et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Stance detection in social media has gained much attention in recent years. The SemEval-2016 Task 6 challenge (Mohammad et al., 2016b) focused on stance classification of tweets discussing controversial political positions (e.g., abortion and climate change) and opposing political candidates (e.g., Clinton and Trump) (Mohammad et al., 2016b) . Following this work, we focus on stance and code tweet instances as for, against, and neutral with respect to target companies. Note that, in our application domain, the zero-sum context inherent in the political domain used for SemEval-2016 does not apply; rejection of one company does not necessarily imply support of other companies. 
Indeed, in our specific case, tweet authors with environmentalist inclinations tend to reject all mining companies.", "cite_spans": [ { "start": 110, "end": 134, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF12" }, { "start": 319, "end": 343, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we present SIRTA, a novel real-time text analytics system coupled with sophisticated monitoring techniques to help organisations manage their social license to operate (SLO) over time. Our experimental results and a case study show its effectiveness and applicability. The work could be furthered in a number of directions. First, our multi-label risk classifier currently does not consider the potential correlations among the risk factors mentioned in a post. It might be helpful to examine whether these correlations exist and, if they do, to refine the model. Second, our current strategy for aggregating the SLO scores of individual posts treats each post equally. We are considering employing a weighting function instead. Third, over time, the underlying distributions of the social media posts may shift, and our text classification models may need to be updated, requiring a retraining policy. 
Finally, we plan to extend SIRTA to other organisations, sectors, or technologies, and to extend our text analytics to support languages other than English (such as Korean and Japanese).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Company partnerships with schools, hospitals, or communities that relate to funded programs or infrastructure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Any danger to valued sacred sites or cultural artefacts, e.g., art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Any dangers to the cultural way of life for local inhabitants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Any community issue that surfaces in relation to mine operations or management.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Protests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "(3) Select Environmental if the message is related to the natural environment, including fauna, flora, water, air, climate, etc. Examples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Discussions about the impact on the natural environment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 Comments on the environmental impact of the company, its decisions and actions. 
CoalMines, StopAdani, StopAdaniNoon, StopAdaniMelbs, StopAdaniK, StopadaniB, stopadanieltham, StopAdaniTSV, stopadanisydney, StopadaniGC, StopAdaniCairns, StopadaniW, StopAdaniGTown, stopadaninoosa, stopadanibowen, adani stop, StopBHP Neutral 19317 MiningNewsNet, ozmining, miningcomau, MiningEnergySA, AUMiningMonthly, MineralsCouncil, Austmine, MiningWeeklyAUS, AuMiningReview", "cite_spans": [ { "start": 78, "end": 326, "text": "CoalMines, StopAdani, StopAdaniNoon, StopAdaniMelbs, StopAdaniK, StopadaniB, stopadanieltham, StopAdaniTSV, stopadanisydney, StopadaniGC, StopAdaniCairns, StopadaniW, StopAdaniGTown, stopadaninoosa, stopadanibowen, adani stop, StopBHP Neutral 19317", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Guidelines for Manual Test Set Annotation: for: The coder infers from the tweet and its context that the author supports the target either because:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet explicitly supports the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet supports something/someone else aligned with or supporting the target, or rejects something/someone else not aligned with or supporting the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet can be seen, in context, to support the target, either because:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "the tweet author's profile lists positions consistent with support of the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "the tweet discourse context places the tweet in support of the target, either by echoing support for the target in other tweets or by opposing rejection 
of the target in other tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "against: The coder infers from the tweet and its context that the author rejects the target. neutral: The coder infers from the tweet or its context that the author neither supports nor rejects the target because:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet states no position consistent with support or rejection of the target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet re-posts information only, with no clear hint as to the author's stance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "\u2022 the tweet context gives no hints as to the tweet author's stance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The stance of an absent risk factor will not be included in the summation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A Fleiss Kappa score of 0.88.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More details of the rules/guidelines for the silver/gold label acquisition are in the Appendix.4 https://developer.twitter.com/en/docs/labs/sampled-stream/api-reference/get-tweets-stream-sample-v15 The removal of all hashtags/mentions allows us to train models that generalise and are not specific to the seen hashtags/mentions in the training data. 
The hashtags were also removed because some of them had already been used to obtain the silver-standard labels of the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Exponentially-Weighted Moving Average, used as a measure to account for data scarcity.7 These campaigns often have the textual prefix \"StopAdani\" in their Twitter account names. 8 See news articles: www.business-humanrights.org/en/australia-rio-tinto-mining-blast-destroys-ancient-aboriginal-sacredsite and www.ft.com/content/6db79b46-8e46-4e89-8688-97064effbc61 (accessed June 24th, 2020).9 https://en.wikipedia.org/wiki/Mariana dam disaster 10 Unfortunately, we lack the data before 1st Jan, 2016.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://voconiq.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": " StopAdani, StopAdaniNoon, StopAdaniMelbs, StopAdaniK, StopadaniB, stopadanieltham, StopAdaniTSV, stopadanisydney, StopadaniGC, StopAdaniCairns, StopadaniW, StopAdaniGTown, stopadaninoosa, stopadanibowen, adani stop, StopBHP, AdaniOnline, bhp, RioTinto, SantosLtd, FortescueNews, kennecottutah, NSWMC, CMEWA, QRCouncil, WoodsideEnergy, MiningNewsNet, ozmining Guidelines for Manual Test Set Annotation:(1) Select Economic if the message is about the economic value of the company or its production, about shareholders, or about any employment/staff related issues (hiring, firing, Health and Safety). Examples:\u2022 share price movements, or an indication that the company is doing well/badly.\u2022 jobs (new hires (+ve) and cuts (-ve)). 
(e.g., \"[company] is recruiting/laying off\")\u2022 big economic wins.\u2022 Positive or negative movement in commodity prices (e.g., iron price per ton).\u2022 Positive/negative economic forecasts for the industry/company.\u2022 Mentions of shareholders.\u2022 Health and safety matters or working conditions.(2) Select Social and cultural if the focus of the message is about how the company interacts with the community and how its activity affects the community. Sample topics include: health and education, community support services, social engagement with government, and the cultural value of a site (e.g., sacred sites) used by the organisation. Any protest activity or activity trying to rally for a cause is taken as \"Social and Cultural\" (this does not include a shareholder revolt, which would be \"Economic and Employment\"). Examples:\u2022 Government calls for a company to be investigated.\u2022 Wining and dining government officials.\u2022 Any interactions between the government and the mining company.", "cite_spans": [ { "start": 1, "end": 360, "text": "StopAdani, StopAdaniNoon, StopAdaniMelbs, StopAdaniK, StopadaniB, stopadanieltham, StopAdaniTSV, stopadanisydney, StopadaniGC, StopAdaniCairns, StopadaniW, StopAdaniGTown, stopadaninoosa, stopadanibowen, adani stop, StopBHP, AdaniOnline, bhp, RioTinto, SantosLtd, FortescueNews, kennecottutah, NSWMC, CMEWA, QRCouncil, WoodsideEnergy, MiningNewsNet, ozmining", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stance detection with bidirectional conditional encoding", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2016, "venue": "", 
"volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05464" ] }, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Tim Rockt\u00e4schel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Social License and Environmental Protection: Why Businesses Go beyond Compliance", "authors": [ { "first": "Neil", "middle": [], "last": "Gunningham", "suffix": "" }, { "first": "Robert", "middle": [ "A" ], "last": "Kagan", "suffix": "" }, { "first": "Dorothy", "middle": [], "last": "Thornton", "suffix": "" } ], "year": 2004, "venue": "Journal of the American Bar Foundation", "volume": "39", "issue": "2", "pages": "307--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil Gunningham, Robert A. Kagan, and Dorothy Thornton. 2004. Social License and Environmental Protection: Why Businesses Go beyond Compliance. Journal of the American Bar Foundation, 39(2):307-341.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Harnessing tweets for early detection of an acute disease event", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Sparks", "suffix": "" }, { "first": "James", "middle": [], "last": "Mchugh", "suffix": "" }, { "first": "Sarvnaz", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Cecile", "middle": [], "last": "Paris", "suffix": "" }, { "first": "Raina", "middle": [], "last": "Macintyre", "suffix": "" } ], "year": 2019, "venue": "Epidemiology", "volume": "31", "issue": "", "pages": "90--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Ross Sparks, James McHugh, Sarvnaz Karimi, Cecile Paris, and Raina MacIntyre. 2019. Harnessing tweets for early detection of an acute disease event. 
Epidemiology, 31:90-97.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431. Association for Computational Linguistics, April.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Metrics and models in software quality engineering", "authors": [ { "first": "Stephen", "middle": [ "H" ], "last": "Kan", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen H Kan. 2002. Metrics and models in software quality engineering. Addison-Wesley Longman Publishing Co., Inc.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar, October. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "We feel: Mapping emotions on Twitter", "authors": [ { "first": "M", "middle": [], "last": "Larsen", "suffix": "" }, { "first": "T", "middle": [ "T" ], "last": "Boonstra", "suffix": "" }, { "first": "P", "middle": [], "last": "Batterham", "suffix": "" }, { "first": "B", "middle": [ "B" ], "last": "O'dea", "suffix": "" }, { "first": "C", "middle": [], "last": "Paris", "suffix": "" }, { "first": "H", "middle": [], "last": "Christensen", "suffix": "" } ], "year": 2015, "venue": "IEEE Journal of Biomedical and Health Informatics (JBHI)", "volume": "9", "issue": "", "pages": "1246--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Larsen, T. T. Boonstra, P. Batterham, B. B. O'Dea, C. Paris, and H. Christensen. 2015. We feel: Mapping emotions on Twitter. IEEE Journal of Biomedical and Health Informatics (JBHI), 9:1246-1252.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentiment analysis and opinion mining", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "5", "issue": "", "pages": "1--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion mining. 
Synthesis lectures on human language technologies, 5(1):1-167.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The paths to social licence to operate: An integrative model explaining community acceptance of mining", "authors": [ { "first": "Kieren", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Airong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Resources Policy", "volume": "39", "issue": "1", "pages": "61--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kieren Moffat and Airong Zhang. 2014. The paths to social licence to operate: An integrative model explaining community acceptance of mining. Resources Policy, 39(1):61-70.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The social licence to operate: a critical review", "authors": [ { "first": "Kieren", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Justine", "middle": [], "last": "Lacey", "suffix": "" }, { "first": "Airong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sina", "middle": [], "last": "Leipold", "suffix": "" } ], "year": 2016, "venue": "Forestry", "volume": "89", "issue": "", "pages": "477--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kieren Moffat, Justine Lacey, Airong Zhang, and Sina Leipold. 2016. The social licence to operate: a critical review. 
Forestry, 89:477-488.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semeval-2016 task 6: Detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "31--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016a. Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SemEval-2016 Task 6: Detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "31--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016b. SemEval-2016 Task 6: Detecting stance in tweets. In Proceedings of the International Workshop on Semantic Evaluation, pages 31-41. ACM, June. 
https://www.aclweb.org/anthology/S/S16/S16-1003.pdf.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stance and sentiment in tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2017, "venue": "ACM Transactions on Internet Technology (TOIT)", "volume": "17", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL-HLT 2013", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL-HLT 2013, pages 380-390. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Opinion mining and sentiment analysis.
Foundations and Trends\u00ae in Information Retrieval", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "", "volume": "2", "issue": "", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends\u00ae in Information Retrieval, 2(1-2):1-135.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Detecting stance in tweets and analyzing its interaction with sentiment", "authors": [ { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "159--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parinaz Sobhani, Saif Mohammad, and Svetlana Kiritchenko. 2016. Detecting stance in tweets and analyzing its interaction with sentiment.
In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 159-169.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Stance detection with hierarchical attention network", "authors": [ { "first": "Qingying", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhongqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2399--2409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2399-2409.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-label classification: An overview", "authors": [ { "first": "Grigorios", "middle": [], "last": "Tsoumakas", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Katakis", "suffix": "" } ], "year": 2007, "venue": "International Journal of Data Warehousing and Mining (IJDWM)", "volume": "3", "issue": "3", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grigorios Tsoumakas and Ioannis Katakis. 2007. Multi-label classification: An overview. International Journal of Data Warehousing and Mining (IJDWM), 3(3):1-13.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Understanding public emotional reactions on Twitter", "authors": [ { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Ninth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "715--716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Wan and C\u00e9cile Paris. 2015. Understanding public emotional reactions on Twitter.
In Proceedings of the Ninth International AAAI Conference on Web and Social Media, pages 715-716. AAAI.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "(a) SLO trends and variations of the monitored mining companies. (b) A case study of detection of SLO score changes in relation to BHP and Rio Tinto. Figure 2: The monitoring of SLO scores of seven major mining companies in the country.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Further analysis on the tweets posted between Jan and March 2016 about BHP.", "num": null, "type_str": "figure" }, "TABREF2": { "num": null, "content": "", "text": "Statistics summary of the datasets for SLO stance classification.", "type_str": "table", "html": null }, "TABREF3": { "num": null, "content": "
%Train fastText  BiLSTM    CNN       BERT
25%    95.9\u00b10.9 94.5\u00b11.0 93.5\u00b11.5 89.7\u00b11.0
50%    96.0\u00b10.9 94.9\u00b12.4 93.9\u00b11.5 91.3\u00b11.4
75%    96.1\u00b10.9 96.0\u00b11.1 93.9\u00b11.5 92.9\u00b12.1
100%   96.2\u00b10.9 96.2\u00b11.1 94.3\u00b11.4 93.2\u00b11.7
", "text": "Table 3: Accuracy on relevance classification.", "type_str": "table", "html": null }, "TABREF5": { "num": null, "content": "
SLO Risk          fastText            BiLSTM              CNN                 BERT
Social            70.8\u00b11.3           76.1\u00b11.3           75.9\u00b10.9           74.7\u00b12.0
Social (R)        71.3\u00b12.6 (0.5)     76.9\u00b13.7 (0.8)     77.7\u00b12.6 (1.8)     76.2\u00b12.2 (1.5***)
Economic          59.6\u00b12.6           63.1\u00b12.5           62.2\u00b12.4           66.3\u00b13.8
Economic (R)      62.2\u00b12.5 (2.6)     68.4\u00b12.0 (5.3***)  66.5\u00b12.7 (4.3**)   71.2\u00b11.5 (4.9**)
Environmental     61.5\u00b12.2           67.0\u00b14.0           67.5\u00b14.5           69.4\u00b12.7
Environmental (R) 68.0\u00b12.9 (6.5***)  68.4\u00b11.2 (1.4)     68.0\u00b14.0 (0.5)     72.5\u00b11.6 (3.2)
Other             49.1\u00b12.7           53.0\u00b13.4           55.9\u00b11.7           58.1\u00b11.0
Other (R)         56.7\u00b11.4 (7.6***)  57.5\u00b12.7 (4.5*)    56.3\u00b11.3 (0.4)     61.1\u00b14.9 (3.0)
Overall           58.5\u00b11.5           64.8\u00b12.8           65.4\u00b12.4           67.2\u00b11.9
Overall (R)       64.3\u00b12.0 (5.8**)   67.8\u00b12.4 (3.0)     67.1\u00b12.7 (1.7)     71.7\u00b14.0 (4.5)
(Two-tailed t-test: *** p < 0.01; ** p < 0.05;
", "text": ", where we observe the risk-aware stance classifiers provided performance gains across all risk-classifier", "type_str": "table", "html": null }, "TABREF6": { "num": null, "content": "", "text": "Accuracy on stance classification. Performance gains of the risk-aware methods over the corresponding non-risk ones are shown in the parentheses.", "type_str": "table", "html": null } } } }