Dataset schema (column name: type, observed range):

- xml: string, length 7–23
- proceedings: string, length 58–222
- year: date, 2005-01-01 to 2024-01-01
- url: string, length 1–64
- language documentation: string, 1 class
- has non-English?: string, 4 classes
- topics: string, 7 classes
- language coverage: string, 32 classes
- title: string, length 32–161
- abstract: string, length 176–2.45k
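The schema above describes one record per paper. As a minimal sketch of how such records could be filtered by their `topics` field, the snippet below uses abbreviated copies of rows from this file; the helper name `filter_by_topic` is purely illustrative and not part of the dataset:

```python
# Abbreviated sample records following the schema above (illustrative only).
records = [
    {"xml": "S18.xml", "year": "2018", "topics": "jailbreaking attacks",
     "title": "Flytxt_NTNU at SemEval-2018 Task 8"},
    {"xml": "W18.xml", "year": "2018", "topics": "general safety, LLM alignment",
     "title": "On the Utility of Lay Summaries and AI Safety Disclosures"},
    {"xml": "D17.xml", "year": "2017", "topics": "others",
     "title": "Affordable On-line Dialogue Policy Learning"},
]

def filter_by_topic(rows, topic):
    """Return rows whose comma-separated `topics` field contains `topic`."""
    return [r for r in rows
            if topic in [t.strip() for t in r["topics"].split(",")]]

safety_rows = filter_by_topic(records, "general safety")
print([r["xml"] for r in safety_rows])  # -> ['W18.xml']
```

Splitting on commas before matching avoids false substring hits across multi-label topic fields.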
xml: S18.xml
proceedings: Proceedings of the 12th International Workshop on Semantic Evaluation
year: 2018
url: https://aclanthology.org/S18-1144
language documentation: x
has non-English?: 0
topics: jailbreaking attacks
language coverage: null
title: Flytxt_NTNU at SemEval-2018 Task 8: Identifying and Classifying Malware Text Using Conditional Random Fields and Naïve Bayes Classifiers
abstract: Cybersecurity risks such as malware threaten the personal safety of users, but identifying malware text is a major challenge. The paper proposes a supervised learning approach to identifying malware sentences in a document (subTask1 of SemEval 2018, Task 8), as well as to classifying malware tokens in those sentences (subTask2). The approach achieved good results, ranking second of twelve participants for both subtasks, with F-scores of 57% for subTask1 and 28% for subTask2.
xml: W18.xml
proceedings: Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (
year: 2018
url: https://aclanthology.org/W18-0801
language documentation: null
has non-English?: null
topics: general safety, LLM alignment
language coverage: null
title: On the Utility of Lay Summaries and AI Safety Disclosures: Toward Robust, Open Research Oversight
abstract: In this position paper, we propose that the community consider encouraging researchers to include two riders, a “Lay Summary” and an “AI Safety Disclosure”, as part of future NLP papers published in ACL forums that present user-facing systems. The goal is to encourage researchers, via a relatively non-intrusive mechanism, to consider the societal implications of technologies carrying (un)known and/or (un)knowable long-term risks, to highlight failure cases, and to provide a mechanism by which the general public (and scientists in other disciplines) can more readily engage in the discussion in an informed manner. This simple proposal requires minimal additional up-front costs for researchers; the lay summary, at least, has significant precedence in the medical literature and other areas of science; and the proposal is aimed to supplement, rather than replace, existing approaches for encouraging researchers to consider the ethical implications of their work, such as those of the Collaborative Institutional Training Initiative (CITI) Program and institutional review boards (IRBs).
xml: W18.xml
proceedings: Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (
year: 2018
url: https://aclanthology.org/W18-5207
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Annotating Claims in the Vaccination Debate
abstract: In this paper we present annotation experiments with three different annotation schemes for the identification of argument components in texts related to the vaccination debate. Identifying claims about vaccinations made by participants in the debate is of great societal interest, as the decision to vaccinate or not has an impact on public health and safety. Since most corpora that have been annotated with argumentation information contain texts that belong to a specific genre and have a well-defined argumentation structure, we needed to adjust the annotation schemes to our corpus, which contains heterogeneous texts from the Web. We started with a complex annotation scheme that had to be simplified due to low IAA. In our final experiment, which focused on annotating claims, annotators reached 57.3% IAA.
xml: W17.xml
proceedings: Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages
year: 2017
url: https://aclanthology.org/W17-2316
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Detecting Personal Medication Intake in Twitter: An Annotated Corpus and Baseline Classification System
abstract: Social media sites (e.g., Twitter) have been used for surveillance of drug safety at the population level, but studies that focus on the effects of medications on specific sets of individuals have had to rely on other sources of data. Mining social media data for this information would require the ability to distinguish indications of personal medication intake in this media. Towards that end, this paper presents an annotated corpus that can be used to train machine learning systems to determine whether a tweet that mentions a medication indicates that the individual posting has taken that medication at a specific time. To demonstrate the utility of the corpus as a training set, we present baseline results of supervised classification.
xml: D17.xml
proceedings: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
year: 2017
url: https://aclanthology.org/D17-1234
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Affordable On-line Dialogue Policy Learning
abstract: The key to building an evolvable dialogue system in real-world scenarios is to ensure affordable on-line dialogue policy learning, which requires the on-line learning process to be safe, efficient and economical. In reality, however, due to the scarcity of real interaction data, the dialogue system usually grows slowly. Moreover, a poor initial dialogue policy easily leads to a bad user experience and fails to attract users to contribute training data, so the learning process is unsustainable. To accurately depict this, two quantitative metrics are proposed to assess safety and efficiency issues. To solve the unsustainable learning problem, we propose a complete companion teaching framework that incorporates guidance from a human teacher. Since human teaching is expensive, we compare various teaching schemes answering the question of how and when to teach, in order to use the teaching budget economically and make the on-line learning process affordable.
xml: D17.xml
proceedings: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
year: 2017
url: https://aclanthology.org/D17-1260
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Agent-Aware Dropout DQN for Safe and Efficient On-line Dialogue Policy Learning
abstract: Hand-crafted rules and reinforcement learning (RL) are two popular choices for obtaining a dialogue policy. A rule-based policy is often reliable within its predefined scope but not self-adaptable, whereas RL is evolvable with data but often suffers from bad initial performance. We employ a companion learning framework to integrate the two approaches for on-line dialogue policy learning, in which a pre-defined rule-based policy acts as a “teacher” and guides a data-driven RL system by giving example actions as well as additional rewards. A novel agent-aware dropout Deep Q-Network (AAD-DQN) is proposed to address the problem of when to consult the teacher and how to learn from the teacher’s experiences. AAD-DQN, as a data-driven student policy, provides (1) two separate experience memories for student and teacher, and (2) an uncertainty estimated by dropout to control the timing of consultation and learning. Simulation experiments showed that the proposed approach can significantly improve both safety and efficiency of on-line policy optimization compared to other companion learning approaches as well as supervised pre-training using a static dialogue corpus.
xml: L16.xml
proceedings: Proceedings of the Tenth International Conference on Language Resources and Evaluation (
year: 2016
url: https://aclanthology.org/L16-1185
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Detecting Implicit Expressions of Affect from Text using Semantic Knowledge on Common Concept Properties
abstract: Emotions are an important part of the human experience. They are responsible for adaptation to and integration in the environment, offering, most of the time together with the cognitive system, the appropriate responses to stimuli in the environment. As such, they are an important component in decision-making processes. In today’s society, the avalanche of stimuli present in the environment (physical or virtual) makes people more prone to respond to stronger affective stimuli (i.e., those that are related to their basic needs and motivations ― survival, food, shelter, etc.). In media reporting, this translates into the use of arguments (factual data) that are known to trigger specific (strong, affective) behavioural reactions from the readers. This paper describes initial efforts to detect such arguments from text, based on the properties of concepts. The final system, able to retrieve and label this type of data from news in traditional and social platforms, is intended to be integrated into the Europe Media Monitor family of applications to detect texts that trigger certain (especially negative) reactions from the public, with consequences for citizen safety and security.
xml: L12.xml
proceedings: Proceedings of the Eighth International Conference on Language Resources and Evaluation (
year: 2012
url: http://www.lrec-conf.org/proceedings/lrec2012/pdf/1008_Paper.pdf
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Foundations of a Multilayer Annotation Framework for Twitter Communications During Crisis Events
abstract: In times of mass emergency, vast amounts of data are generated via computer-mediated communication (CMC) that are difficult to manually collect and organize into a coherent picture. Yet valuable information is broadcast, and can provide useful insight into time- and safety-critical situations if captured and analyzed efficiently and effectively. We describe a natural language processing component of the EPIC (Empowering the Public with Information in Crisis) Project infrastructure, designed to extract linguistic and behavioral information from tweet text to aid in the task of information integration. The system incorporates linguistic annotation, in the form of Named Entity Tagging, as well as behavioral annotations to capture tweets contributing to situational awareness and analyze the information type of the tweet content. We show classification results and describe future integration of these classifiers in the larger EPIC infrastructure.
xml: L10.xml
proceedings: Proceedings of the Seventh International Conference on Language Resources and Evaluation (
year: 2010
url: http://www.lrec-conf.org/proceedings/lrec2010/pdf/137_Paper.pdf
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Resources for Controlled Languages for Alert Messages and Protocols in the European Perspective
abstract: This paper is concerned with resources for controlled languages for alert messages and protocols in the European perspective. These resources have been produced as the outcome of a project (Alert Messages and Protocols: MESSAGE) which has been funded with the support of the European Commission - Directorate-General Justice, Freedom and Security, with the specific objective of “promoting and supporting the development of security standards, and an exchange of know-how and experience on protection of people”. The MESSAGE project involved the development and transfer of a methodology for writing safe and safely translatable alert messages and protocols, created by Centre Tesnière in collaboration with the aircraft industry, the health profession, and emergency services, by means of a consortium of four partners to their four European member states in their languages (ES, FR (Coordinator), GB, PL). The paper describes alert messages and protocols, controlled languages for safety and security, the target groups involved, controlled language evaluation, dissemination, the resources that are available, both “Freely available” and “From Owner”, together with illustrations of the resources, and the potential transferability to other sectors and users.
xml: 2005.mtsummit.xml
proceedings: Proceedings of Machine Translation Summit X: Invited papers
year: 2005
url: https://aclanthology.org/2005.mtsummit-invited.4
language documentation: null
has non-English?: null
topics: others
language coverage: null
title: Global Public Health Intelligence Network (GPHIN)
abstract: Accurate and timely information on global public health issues is key to being able to quickly assess and respond to emerging health risks around the world. The Public Health Agency of Canada has developed the Global Public Health Intelligence Network (GPHIN). Information from GPHIN is provided to the WHO, international governments and non-governmental organizations, who can then quickly react to public health incidents. GPHIN is a secure Internet-based “early warning” system that gathers preliminary reports of public health significance on a “real-time” basis, 24 hours a day, 7 days a week. This unique multilingual system gathers and disseminates relevant information on disease outbreaks and other public health events by monitoring global media sources such as news wires and web sites. This monitoring is done in eight languages, with machine translation being used to translate non-English articles into English and English articles into the other languages. The information is filtered for relevancy by an automated process, which is then complemented by human analysis. The output is categorized and made accessible to users. Notifications about public health events that may have serious public health consequences are immediately forwarded to users. GPHIN employs a “best-of-breed” approach when it comes to the selection of the machine translation “engines”. This philosophy ensures that the quality of the machine translation is the best available for whatever language pair is selected. It also imposes some unique integration and operational problems. GPHIN has a broad scope. It tracks events such as disease outbreaks, infectious diseases, contaminated food and water, bio-terrorism and exposure to chemicals, natural disasters, and issues related to the safety of products, drugs and medical devices.

GPHIN is managed by Health Canada’s Centre for Emergency Preparedness and Response (CEPR), which was created in July 2000 to serve as Canada’s central coordinating point for public health security. It is considered a centre of expertise in the area of civic emergencies, including natural disasters and malicious acts with health repercussions. CEPR offers a number of practical supports to municipalities, provinces and territories, and other partners involved in first response and public health security. This is achieved through its network of public health, emergency health services, and emergency social services contacts.