{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:37.719509Z"
},
"title": "",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Munro",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Recent progress in Artificial Intelligence (AI) and Natural Language Processing (NLP) has greatly increased their presence in everyday consumer products in the last decade. Common examples include virtual assistants, recommendation systems, and personal healthcare management systems, among others. Advancements in these fields have historically been driven by the goal of improving model performance as measured by accuracy, but recently the NLP research community has started incorporating additional constraints to make sure models are fair and privacy-preserving. However, these constraints are not often considered together, which is important since there are critical questions at the intersection of these constraints such as the tension between simultaneously meeting privacy objectives and fairness objectives, which requires knowledge about the demographics a user belongs to. In this workshop, we aim to bring together these distinct yet closely related topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "We invited papers which focus on developing models that are \"explainable, fair, privacy-preserving, causal, and robust\" (Trustworthy ML Initiative). Topics of interest include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "\u2022 Differential Privacy ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
}
],
"back_matter": [
{
"text": "June 10, 2021 9:00-9:10Opening Organizers 9:10-10:00 Keynote 1 Richard Zemel 10:00-11:00 Paper Presentations ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conference Program",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Paper Presentations xER: An Explainable Model for Entity Resolution using an Efficient Solution for the Clique Partitioning Problem Samhita Vadrevu, Rakesh Nagi, JinJun Xiong and Wen-mei Hwu Gender Bias in Natural Language Processing Across Human Languages Abigail Matthews",
"authors": [
{
"first": "Isabella",
"middle": [],
"last": "Grasso",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Mahoney",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Esma",
"middle": [],
"last": "Wali",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Middleton",
"suffix": ""
}
],
"year": 2021,
"venue": "Lijun Lyu, Ujwal Gadiraju and Avishek Anand",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "June 10, 2021 (continued) 11:15-12:15 Paper Presentations xER: An Explainable Model for Entity Resolution using an Efficient Solution for the Clique Partitioning Problem Samhita Vadrevu, Rakesh Nagi, JinJun Xiong and Wen-mei Hwu Gender Bias in Natural Language Processing Across Human Languages Abigail Matthews, Isabella Grasso, Christopher Mahoney, Yan Chen, Esma Wali, Thomas Middleton, Mariama Njie and Jeanna Matthews Interpreting Text Classifiers by Learning Context-sensitive Influence of Words Sawan Kumar, Kalpit Dixit and Kashif Shah Towards Benchmarking the Utility of Explanations for Model Debugging Maximilian Idahl, Lijun Lyu, Ujwal Gadiraju and Avishek Anand 12:15-1:30 Lunch Break 13:00-14:00 Mentorship Meeting 14:00-14:50 Keynote 2 Mandy Korpusik 14:50-15:00 Break 15:00-16:00 Poster Session",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"type_str": "table",
"text": "In total, we accepted 11 papers, including 2 non-archival papers. We hope all the attendants enjoy this workshop.",
"content": "<table><tr><td colspan=\"2\">Organizing Committee \u2022 Hila Gonen -Bar-Ilan University</td></tr><tr><td colspan=\"2\">\u2022 Patricia Thaine -University of Toronto \u2022 Yada Pruksachatkun -Alexa AI \u2022 Jamie Hayes -Google DeepMind, University College London, UK \u2022 Anil Ramakrishna -Alexa AI \u2022 Emily Sheng -University of California Los Angeles \u2022 Kai-Wei Chang -UCLA, Amazon Visiting Academic \u2022 Isar Nejadgholi -National Research Council Canada \u2022 Satyapriya Krishna -Alexa AI \u2022 Jwala Dhamala -Alexa AI \u2022 Anthony Rios -University of Texas at San Antonio</td></tr><tr><td colspan=\"2\">\u2022 Tanaya Guha -University of Warwick</td></tr><tr><td>\u2022 Xiang Ren -USC</td><td/></tr><tr><td/><td>Speakers</td></tr><tr><td colspan=\"2\">\u2022 Mandy Korpusik -Assistant professor, Loyola Marymount University \u2022 Fairness and Bias: Evaluation and Treatments \u2022 Richard Zemel -Industrial Research Chair in Machine Learning, University of Toronto \u2022 Model Explainability and Interpretability \u2022 Accountability \u2022 Robert Monarch -Author, Human-in-the-Loop Machine Learning</td></tr><tr><td>\u2022 Ethics</td><td>Program committee</td></tr><tr><td colspan=\"2\">\u2022 Industry applications of Trustworthy NLP \u2022 Rahul Gupta -Alexa AI \u2022 Causal Inference \u2022 Willie Boag -Massachusetts Institute of Technology \u2022 Secure and trustworthy data generation \u2022 Naveen Kumar -Disney Research</td></tr><tr><td colspan=\"2\">\u2022 Nikita Nangia -New York University</td></tr><tr><td>\u2022 He He -New York University</td><td/></tr><tr><td colspan=\"2\">\u2022 Jieyu Zhao -University of California Los Angeles</td></tr><tr><td colspan=\"2\">\u2022 Nanyun Peng -University of California Los Angeles</td></tr><tr><td>\u2022 Spandana Gella -Alexa AI</td><td/></tr><tr><td colspan=\"2\">\u2022 Moin Nadeem -Massachusetts Institute of Technology</td></tr><tr><td colspan=\"2\">\u2022 Maarten Sap -University of 
Washington</td></tr><tr><td colspan=\"2\">\u2022 Tianlu Wang -University of Virginia</td></tr><tr><td colspan=\"2\">\u2022 William Wang -University of Santa Barbara</td></tr><tr><td colspan=\"2\">\u2022 Joe Near -University of Vermont</td></tr><tr><td>\u2022 David Darais -Galois</td><td/></tr><tr><td colspan=\"2\">\u2022 Pratik Gajane -Department of Computer Science, Montanuniversitat Leoben, Austria</td></tr><tr><td colspan=\"2\">\u2022 Paul Pu Liang -Carnegie Mellon University</td></tr><tr><td/><td>v vi</td></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractorwith an Explanation Decoder Zheng Tang and Mihai Surdeanu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Measuring Biases of Word Embeddings: What Similarity Measures and Descriptive Statistics to Use? Hossein Azarpanah and Mohsen Farhadloo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 Private Release of Text Embedding Vectors Oluwaseyi Feyisetan and Shiva Kasiviswanathan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Accountable Error Characterization Amita Misra, Zhe Liu and Jalal Mahmud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 xER: An Explainable Model for Entity Resolution using an Efficient Solution for the Clique Partitioning Problem Samhita Vadrevu, Rakesh Nagi, JinJun Xiong and Wen-mei Hwu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Gender Bias in Natural Language Processing Across Human Languages Abigail Matthews, Isabella Grasso, Christopher Mahoney, Yan Chen, Esma Wali, Thomas Middleton, Mariama Njie and Jeanna Matthews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Interpreting Text Classifiers by Learning Context-sensitive Influence of Words Sawan Kumar, Kalpit Dixit and Kashif Shah . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Towards Benchmarking the Utility of Explanations for Model Debugging Maximilian Idahl, Lijun Lyu, Ujwal Gadiraju and Avishek Anand . . . . . . . . . . . . . . . . . . . . . . . . . . . 68",
"content": "<table/>",
"num": null
}
}
}
}