{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:30.837833Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Provocation: Contestability in Large-Scale Interactive NLP Systems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Information School University of Washington [email protected]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tanushree Mitra",
"sec_num": null
},
{
"text": "Designing computational systems for language analysis means, increasingly, designing for interactions with natural language processing (NLP) algorithms. For example, sentiment analysis, topic modeling, toxicity classification, and other language modeling techniques have become common in interactive user-facing systems. These developments raise novel, complex challenges, both in terms of designing such systems and user's interactions with them (Baumer et al., 2020) . Accordingly researchers have advocated for rethinking interactive ML systems that involve users at every stage of the system (Amershi et al., 2014) . For example, Horvitz' principles for effective \"mixedinitiative\" systems include querying users about goals and preferences and scoping system precision to match users' needs (Horvitz, 1999) . Other approaches include value-centered design (Knobel and Bowker, 2011), transparency in design (Ananny and Crawford, 2018) and more recently contestability (Hirsch et al., 2017) . Despite these various proposed approaches for ML-based systems, situating the discussion in the context of interactive NLP systems, in particular, has remained elusive. Each of these design principles (mixedinitiative, value-centered, transparency, contestability) can claim it's own spot for a full-day workshop discussion. Hence, to scope the conversation for this workshop, we will primarily focus on contestability in NLP systems in this provocation piece.",
"cite_spans": [
{
"start": 447,
"end": 468,
"text": "(Baumer et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 596,
"end": 618,
"text": "(Amershi et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 796,
"end": 811,
"text": "(Horvitz, 1999)",
"ref_id": "BIBREF15"
},
{
"start": 911,
"end": 938,
"text": "(Ananny and Crawford, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 972,
"end": 993,
"text": "(Hirsch et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "1"
},
{
"text": "The notion of contestability originates from the need for contesting or challenging machine predictions (Hirsch et al., 2017; Kluttz et al., 2018) . For example, contestability in recidivism prediction systems or health behavior prediction systems. Here we argue the need for exploring contestability in large-scale online systems and in particular NLP systems. With respect to language-based systems, consider the Perspective API developed by Google's Counter-Abuse Technology team to identify toxic language in text. The technology has now been integrated within the New York Times comment interface to facilitate largescale moderation of potentially toxic and obscene comments on news stories. However, the same technology has also incorrectly discovered a positive correlation between identity terms containing information on race or sexual orientation (e.g., the phrase \"I am a gay black woman\" received a high toxicity score) (developers.google.com). But we do not yet have an established way to contest these decisions made by NLP algorithms integrated within large socio-technical systems (in this case a news platforms with widespread readership). Perhaps the closest form of contestability research in large-scale online systems relates to efforts around auditing algorithmic systems for bias (Bozdag, 2013; Chen et al., 2018) , discrimination (Chen et al., 2016; Hannak et al., 2014; Mikians et al., 2012) , and fairness (Dwork et al., 2012) . While audit studies is a way to detect undesirable behavior in large-scale \"black-boxed\" Web systems, allowing users to meaningfully contest such undesirable behavior will offer ways to help people not only make sense of an algorithm's behavior, but also restore human agency in systems that are intertwining humans and non-human agents (Ananny and Crawford, 2018) . How can we design for contestability in large-scale online NLP systems? What are the goals when designing for contestability in online systems? What are the challenges and limitations of the contestability ideal?",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Hirsch et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 126,
"end": 146,
"text": "Kluttz et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 1303,
"end": 1317,
"text": "(Bozdag, 2013;",
"ref_id": "BIBREF3"
},
{
"start": 1318,
"end": 1336,
"text": "Chen et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 1354,
"end": 1373,
"text": "(Chen et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 1374,
"end": 1394,
"text": "Hannak et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 1395,
"end": 1416,
"text": "Mikians et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 1432,
"end": 1452,
"text": "(Dwork et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 1792,
"end": 1819,
"text": "(Ananny and Crawford, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},
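{
"text": "To make concrete the kind of NLP decision a user might want to contest, the following is a minimal sketch of how a client could request a toxicity score from the Perspective API's comments:analyze endpoint. The endpoint and request shape follow Google's public documentation, but the API key and the example comment are placeholders, and the snippet is an illustration rather than the moderation pipeline any particular platform actually deploys.\n\nimport requests  # minimal, illustrative client; assumes a valid API key\n\nAPI_KEY = 'YOUR_API_KEY'  # placeholder\nURL = 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze'\n\ndef toxicity_score(text):\n    # Request a single attribute (TOXICITY) for one comment.\n    body = {'comment': {'text': text}, 'requestedAttributes': {'TOXICITY': {}}}\n    resp = requests.post(URL, params={'key': API_KEY}, json=body)\n    resp.raise_for_status()\n    # summaryScore.value is a probability-like score in [0, 1].\n    return resp.json()['attributeScores']['TOXICITY']['summaryScore']['value']\n\n# The provocation's example phrase illustrates how such scores can misfire.\nprint(toxicity_score('I am a gay black woman'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},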
{
"text": "2 Designing Contestability in Large Online NLP Systems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},
{
"text": "The first step in designing for contestability is to layout the goals we would like to achieve when bringing the ideal of contestability in large-scale online socio-technical systems driven by language technologies. In most predictive systems, the goal is to contest a machine prediction in an attempt to correct a wrong decision. For example, in healthcare systems, contestability strives to improve the accuracy of ML models by deploying the system among expert users and then soliciting feedback from them (Hirsch et al., 2017) . However, the goal in large online social systems is more nuanced. For example, in an online ride sharing system a driver might want to contest their ride and route recommendation, a rider might want to contest the tip suggestion or the matched driver, resulting in a multi-stakeholder contestability problem. In this particular case, it is a complex assemblage of the algorithm and the two stake holders (the driver and the rider) who are both using the same instance of the ride sharing platform.",
"cite_spans": [
{
"start": 509,
"end": 530,
"text": "(Hirsch et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},
{
"text": "Here I outline two examples of NLP-based online systems along with the phenomenon that could be contested and the contestation goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},
{
"text": "Contestability in Moderation Systems. Content moderation is a predominant practice in almost all social media platforms, be it, Twitter, Facebook, Reddit, and is now increasingly common in news platforms (e.g. Perspective APIs usage in NYT comment moderation interface). Most content moderation systems are based on linguistic models (see (Gunasekara and Nejadgholi, 2018 ) for a review). However, none currently have designs in place where a user can appeal against content removal. If the goal is to contest moderation practices, how do we design to meet that goal? Some social media platforms, like Reddit, have a twotier governance structure, the first tier enforced by the platform's content policy and a 2nd tier rule enforcement set by the human moderators within the community. Such differences in governance structures across online platforms make it almost impossible to come up with a \"one size fits all\" contestation goal when building a language-based moderation system.",
"cite_spans": [
{
"start": 339,
"end": 371,
"text": "(Gunasekara and Nejadgholi, 2018",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability:",
"sec_num": null
},
{
"text": "Language models are fundamental to driving online information retrieval systems, such as search and recommendation systems (see (Zhai, 2008; Hiemstra, 2001; Liu and Croft, 2004) ). For a regular user, online search and recommender systems have become an integral part of their daily lives. De-spite their increasingly important role in selecting, presenting, ranking, and recommending what information is considered most relevant for us-a key aspect governing our ability to meaningfully participate in public life (Gillespie, 2014) , there is no notion of whether this information is credible or whether the returned results are re-inforcing existing societal biases. Neither do users have any means to contest or challenge to understand why they are seeing what they are seeing on search platforms and their recommendation feeds. The lack of contestability in search platform coupled with an unwavering trust placed in search engines can together lead to misinformed citizenry. Our goal for including contestability in search systems, would be to contest potential inaccurate results, unreasonable rankings (e.g., a low credible alternative news source ranked higher than a reputed journalistic source), inaccurate recommendations (for e.g., YouTube recommending more pro-conspiracy videos after user watches one anti-vaccine video) or biased results.",
"cite_spans": [
{
"start": 128,
"end": 140,
"text": "(Zhai, 2008;",
"ref_id": "BIBREF27"
},
{
"start": 141,
"end": 156,
"text": "Hiemstra, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 157,
"end": 177,
"text": "Liu and Croft, 2004)",
"ref_id": "BIBREF23"
},
{
"start": 515,
"end": 532,
"text": "(Gillespie, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contestability in Informatin Retrieval Systems.",
"sec_num": null
},
{
"text": "What are the ways in which we can design for contestability in online systems? Does contestability help users improve their understanding and trust of large-scale online systems? One possibility is to draw inspiration from the design of intelligibility in context-aware systems (Lim et al., 2009) . Of particular interest are end-user programming systems which allowed users to ask questions when their expectations were not met (Ko and Myers, 2004) . Users asked why did questions when something unexpected happened and why not questions when something expected did not occur Myers, 2004, 2009) . Another line of work that is relevant is (Kulesza et al., 2011 (Kulesza et al., , 2009 What You See is What You Test for Machine Learning (WYSIWYT/ML) method for systematic testing of machine learning applications. WYSIWYT/ML offers three key functionalities: 1) advises the user about which predictions to test, 2) contributes more tests \"like\" the user's, 3) measures how much of the assistant's reasoning has been tested, and 4) continually monitors over time whether previous testing still \"covers\" new behaviors learned. Inspired from these approaches, here I lay a list of intuitive questions that can inform the design of contestability. By no means, this list is exhaustive. The hope is that discussions at the workshop will refine and expand the list. 1. Why: Why did the online system do X, where X can be recommend, suggest, rank, moderate, etc.?",
"cite_spans": [
{
"start": 278,
"end": 296,
"text": "(Lim et al., 2009)",
"ref_id": "BIBREF22"
},
{
"start": 429,
"end": 449,
"text": "(Ko and Myers, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 577,
"end": 595,
"text": "Myers, 2004, 2009)",
"ref_id": null
},
{
"start": 639,
"end": 660,
"text": "(Kulesza et al., 2011",
"ref_id": "BIBREF20"
},
{
"start": 661,
"end": 684,
"text": "(Kulesza et al., , 2009",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "2. Why Not: Why did the system not do Y? For example, in designing contestability for content moderation in social platforms, a user can contest why did the system did not remove this other post?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "3. What if: What would the system do if Z happens? For example, what would Amazon recommend if I bought the lower ranked product from its ranked product list?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "4. How to: How can I get the system to do A, given the current context? For example, how can I get this ride sharing service to pair me with a female driver, given that I am a woman and I am traveling in a region notorious for crimes against woman?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
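{
"text": "The following is a minimal, hypothetical sketch of how the four question types above could be represented and routed inside an online system; the type names, the Contestation record, and the handler interface are my own illustration, not an existing platform API.\n\nfrom dataclasses import dataclass\nfrom enum import Enum\n\nclass ContestationType(Enum):\n    WHY = 'why'          # Why did the system do X?\n    WHY_NOT = 'why_not'  # Why did the system not do Y?\n    WHAT_IF = 'what_if'  # What would the system do if Z happened?\n    HOW_TO = 'how_to'    # How can I get the system to do A?\n\n@dataclass\nclass Contestation:\n    user_id: str\n    ctype: ContestationType\n    target: str     # the decision being contested, e.g. a removed post id\n    rationale: str  # the user's stated reason for contesting\n\ndef route(contestation, handlers):\n    # Dispatch by question type; a real system would also log the request\n    # and return an explanation or an appeal outcome to the user.\n    return handlers[contestation.ctype](contestation)\n\n# Example: a 'why not' contestation of a moderation decision.\nhandlers = {ContestationType.WHY_NOT: lambda c: 'reviewing ' + c.target}\nprint(route(Contestation('u42', ContestationType.WHY_NOT, 'post:123',\n                         'a similar post was not removed'), handlers))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},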
{
"text": "3 Challenges for Contestability in Large Online NLP Systems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "Infusing contestability in complex online systems can lead to unforeseen challenges. The ideal of contestability as a means to provide agency to users can be limited in multiple ways. Here I list a few.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "Contestability can unintentionally occlude. Ananny and Crawford (Ananny and Crawford, 2018) in their critical interrogation of the ideal of transparency argue that transparency to promote understandibility of black-box ML systems can backfire in various ways, one of them being intentional occlusion-\"visibility produces such great quantities of information that important pieces of information become inadvertently hidden in the detritus of the information made visible.\" Contestability can also result in similar occlusion. For example, an online system that offers too many options for contesting the system's decision may unnecessarily distract the user from the central information that the system intends to offer.",
"cite_spans": [
{
"start": 64,
"end": 91,
"text": "(Ananny and Crawford, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "Contestability can reinforce pre-existing biases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "Considering that most algorithms driving largescale online systems are personalized-i.e., they serve results based on user's past behavior-such personalization tend to re-enforce pre-existing biases. Imagine a factuality model that sits within an online system and determines the credibility of content based on linguistic features (such models already exist in the literature; see (Soni et al., 2014) and (Mitra et al., 2017) . Offering real-time corrections every time the user contests search results that goes against their pre-existing beliefs, often act as mandates and can backfire by inadvertently provoking users into attitude consistent misperceptions (Garrett and Weeks, 2013) . Another unforeseen scenario can occur for certain classes of search queries, popularly known as \"data voids\"-search terms for which the available relevant data is limited or non-existent (Golebiewski and boyd, 2018) .",
"cite_spans": [
{
"start": 382,
"end": 401,
"text": "(Soni et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 406,
"end": 426,
"text": "(Mitra et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 662,
"end": 687,
"text": "(Garrett and Weeks, 2013)",
"ref_id": "BIBREF8"
},
{
"start": 877,
"end": 905,
"text": "(Golebiewski and boyd, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
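{
"text": "To ground the scenario above, here is a toy sketch of a factuality-style classifier that scores text from surface linguistic features; it is a generic illustration that stands in for, rather than reproduces, the models of Soni et al. (2014) and Mitra et al. (2017), and the two labeled examples are invented placeholders.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# Placeholder training data: 1 = credible, 0 = not credible.\ntexts = ['Officials confirmed the figures in a press briefing on Monday.',\n         'SHOCKING!!! They do not want you to know this one secret cure!']\nlabels = [1, 0]\n\n# Word and bigram features feeding a linear classifier.\nmodel = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())\nmodel.fit(texts, labels)\n\n# If such a score drove real-time corrections, a contesting user could nudge\n# the system toward their prior beliefs, the risk discussed above.\nprint(model.predict_proba(['Experts say the claim remains unverified.'])[:, 1])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},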
{
"text": "Allowing the user to contest and introduce problematic corrections into the system defeats the purpose of contestability in such scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
},
{
"text": "Contestability has temporal limitations: When to introduce contestability? Decisions about when to allow contestability in an online social system can get too complex too quickly to reach any useful design decision. A large-scale social system has multiple stakeholders, each of whose contestability action at any point in time can effect the system and in turn the contestability decision of the next action. For example, a language model working in the moderation interface of a news website has to balance between several stakeholders: the users commenting on the news story, the moderator managing the commenting system, the reporter who wrote the story, and the editor who reviewed the story. Moreover online systems are continually changing over time as data are being generated, interacted, and added by users, and as the number of users interacting with the system change. Thus when thinking about designing for contestability in multi-stakeholder online systems, the notion of temporal dimension is key. I hope that the workshop will provide opportunities to brainstorm on these complex questions and stir new questions in the domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why, When, and How to Contest",
"sec_num": "2.1"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Power to the people: The role of humans in interactive machine learning",
"authors": [
{
"first": "Saleema",
"middle": [],
"last": "Amershi",
"suffix": ""
},
{
"first": "Maya",
"middle": [],
"last": "Cakmak",
"suffix": ""
},
{
"first": "William",
"middle": [
"Bradley"
],
"last": "Knox",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "35",
"issue": "",
"pages": "105--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. Ai Magazine, 35(4):105-120.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. new media & society",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Ananny",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "20",
"issue": "",
"pages": "973--989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Ananny and Kate Crawford. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountabil- ity. new media & society, 20(3):973-989.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Topicalizer: reframing core concepts in machine learning visualization by co-designing for interpretivist scholarship",
"authors": [
{
"first": "P",
"middle": [
"S"
],
"last": "Eric",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Siedel",
"suffix": ""
},
{
"first": "Jiayun",
"middle": [],
"last": "Mcdonnell",
"suffix": ""
},
{
"first": "Patricia",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Micki",
"middle": [],
"last": "Sittikul",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcgee",
"suffix": ""
}
],
"year": 2020,
"venue": "Human-Computer Interaction",
"volume": "35",
"issue": "5-6",
"pages": "452--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric PS Baumer, Drew Siedel, Lena McDonnell, Ji- ayun Zhong, Patricia Sittikul, and Micki McGee. 2020. Topicalizer: reframing core concepts in ma- chine learning visualization by co-designing for in- terpretivist scholarship. Human-Computer Interac- tion, 35(5-6):452-480.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bias in algorithmic filtering and personalization",
"authors": [
{
"first": "Engin",
"middle": [],
"last": "Bozdag",
"suffix": ""
}
],
"year": 2013,
"venue": "Ethics and information technology",
"volume": "15",
"issue": "3",
"pages": "209--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Engin Bozdag. 2013. Bias in algorithmic filtering and personalization. Ethics and information technology, 15(3):209-227.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Investigating the impact of gender on rank in resume search engines",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ruijun",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le Chen, Ruijun Ma, Anik\u00f3 Hann\u00e1k, and Christo Wil- son. 2018. Investigating the impact of gender on rank in resume search engines. In Proceedings of the 2018 CHI Conference on Human Factors in Comput- ing Systems, page 651. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An empirical analysis of algorithmic pricing on amazon marketplace",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1339--1349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le Chen, Alan Mislove, and Christo Wilson. 2016. An empirical analysis of algorithmic pricing on amazon marketplace. In Proceedings of the 25th Interna- tional Conference on World Wide Web, pages 1339- 1349. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ml practicum: Fairness in perspective api",
"authors": [],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "developers.google.com. Ml practicum: Fairness in perspective api. https://developers. google.com/machine-learning/practica/ fairness-indicators. [Online; accessed 8-Jan-2021].",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fairness through awareness",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Dwork",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Toniann",
"middle": [],
"last": "Pitassi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Reingold",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 3rd innovations in theoretical computer science conference",
"volume": "",
"issue": "",
"pages": "214--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd inno- vations in theoretical computer science conference, pages 214-226. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The promise and peril of real-time corrections to political misperceptions",
"authors": [
{
"first": "Kelly",
"middle": [],
"last": "Garrett",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"E"
],
"last": "Weeks",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. CSCW",
"volume": "",
"issue": "",
"pages": "1047--1058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Kelly Garrett and Brian E Weeks. 2013. The promise and peril of real-time corrections to political misper- ceptions. In Proc. CSCW, pages 1047-1058. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The relevance of algorithms",
"authors": [
{
"first": "Tarleton",
"middle": [],
"last": "Gillespie",
"suffix": ""
}
],
"year": 2014,
"venue": "Media technologies: Essays on communication, materiality, and society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tarleton Gillespie. 2014. The relevance of algorithms. Media technologies: Essays on communication, ma- teriality, and society, 167.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Data voids: Where missing data can easily be exploited",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Golebiewski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boyd",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Golebiewski and danah boyd. 2018. Data voids: Where missing data can easily be exploited.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A review of standard text classification practices for multilabel toxicity identification of online content",
"authors": [
{
"first": "Isuru",
"middle": [],
"last": "Gunasekara",
"suffix": ""
},
{
"first": "Isar",
"middle": [],
"last": "Nejadgholi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd workshop on abusive language online (ALW2)",
"volume": "",
"issue": "",
"pages": "21--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isuru Gunasekara and Isar Nejadgholi. 2018. A review of standard text classification practices for multi- label toxicity identification of online content. In Pro- ceedings of the 2nd workshop on abusive language online (ALW2), pages 21-25.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Measuring price discrimination and steering on e-commerce web sites",
"authors": [
{
"first": "Aniko",
"middle": [],
"last": "Hannak",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Soeller",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lazer",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on internet measurement conference",
"volume": "",
"issue": "",
"pages": "305--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniko Hannak, Gary Soeller, David Lazer, Alan Mis- love, and Christo Wilson. 2014. Measuring price dis- crimination and steering on e-commerce web sites. In Proceedings of the 2014 conference on internet measurement conference, pages 305-318. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using language models for information retrieval",
"authors": [
{
"first": "Djoerd",
"middle": [],
"last": "Hiemstra",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djoerd Hiemstra. 2001. Using language models for in- formation retrieval. Citeseer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Designing contestability: Interaction design, machine learning, and mental health",
"authors": [
{
"first": "Tad",
"middle": [],
"last": "Hirsch",
"suffix": ""
},
{
"first": "Kritzia",
"middle": [],
"last": "Merced",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Zac",
"suffix": ""
},
{
"first": "David C",
"middle": [],
"last": "Imel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Atkins",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Designing Interactive Systems",
"volume": "",
"issue": "",
"pages": "95--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tad Hirsch, Kritzia Merced, Shrikanth Narayanan, Zac E Imel, and David C Atkins. 2017. Designing contestability: Interaction design, machine learning, and mental health. In Proceedings of the 2017 Con- ference on Designing Interactive Systems, pages 95- 99. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Principles of mixed-initiative user interfaces",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the SIGCHI conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI con- ference on Human Factors in Computing Systems, pages 159-166.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Contestability and professionals: From explanations to engagement with algorithmic systems",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Kluttz",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [
"K"
],
"last": "Mulligan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Kluttz, Nitin Kohli, and Deirdre K Mulligan. 2018. Contestability and professionals: From ex- planations to engagement with algorithmic systems. Available at SSRN 3311894.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Values in design",
"authors": [
{
"first": "Cory",
"middle": [],
"last": "Knobel",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"C"
],
"last": "Bowker",
"suffix": ""
}
],
"year": 2011,
"venue": "Communications of the ACM",
"volume": "54",
"issue": "7",
"pages": "26--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cory Knobel and Geoffrey C Bowker. 2011. Values in design. Communications of the ACM, 54(7):26-28.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Designing the whyline: a debugging interface for asking questions about program behavior",
"authors": [
{
"first": "J",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Ko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Myers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the SIGCHI conference on Human factors in computing systems",
"volume": "",
"issue": "",
"pages": "151--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J Ko and Brad A Myers. 2004. Designing the whyline: a debugging interface for asking ques- tions about program behavior. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 151-158. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Finding causes of program output with the java whyline",
"authors": [
{
"first": "J",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Ko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Myers",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1569--1578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew J Ko and Brad A Myers. 2009. Finding causes of program output with the java whyline. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1569-1578. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Where are my intelligent assistant's mistakes? a systematic testing approach",
"authors": [
{
"first": "Todd",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Burnett",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Stumpf",
"suffix": ""
},
{
"first": "Weng-Keen",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Shubhomoy",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Groce",
"suffix": ""
},
{
"first": "Amber",
"middle": [],
"last": "Shinsel",
"suffix": ""
},
{
"first": "Forrest",
"middle": [],
"last": "Bice",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Mcintosh",
"suffix": ""
}
],
"year": 2011,
"venue": "International Symposium on End User Development",
"volume": "",
"issue": "",
"pages": "171--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Todd Kulesza, Margaret Burnett, Simone Stumpf, Weng-Keen Wong, Shubhomoy Das, Alex Groce, Amber Shinsel, Forrest Bice, and Kevin McIntosh. 2011. Where are my intelligent assistant's mistakes? a systematic testing approach. In International Sym- posium on End User Development, pages 171-186. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Fixing the program my computer learned: Barriers for end users, challenges for the machine",
"authors": [
{
"first": "Todd",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Weng-Keen",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Stumpf",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Margaret",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Burnett",
"suffix": ""
},
{
"first": "Andrew J",
"middle": [],
"last": "Oberst",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ko",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 14th international conference on Intelligent user interfaces",
"volume": "",
"issue": "",
"pages": "187--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Todd Kulesza, Weng-Keen Wong, Simone Stumpf, Stephen Perona, Rachel White, Margaret M Burnett, Ian Oberst, and Andrew J Ko. 2009. Fixing the pro- gram my computer learned: Barriers for end users, challenges for the machine. In Proceedings of the 14th international conference on Intelligent user in- terfaces, pages 187-196. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Why and why not explanations improve the intelligibility of context-aware intelligent systems",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Brian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Anind",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Avrahami",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "2119--2128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Y Lim, Anind K Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Hu- man Factors in Computing Systems, pages 2119- 2128. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cluster-based retrieval using language models",
"authors": [
{
"first": "Xiaoyong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "186--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoyong Liu and W Bruce Croft. 2004. Cluster-based retrieval using language models. In Proceedings of the 27th annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 186-193.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Detecting price and search discrimination on the internet",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "Mikians",
"suffix": ""
},
{
"first": "L\u00e1szl\u00f3",
"middle": [],
"last": "Gyarmati",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Erramilli",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Laoutaris",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 11th ACM Workshop on Hot Topics in Networks",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakub Mikians, L\u00e1szl\u00f3 Gyarmati, Vijay Erramilli, and Nikolaos Laoutaris. 2012. Detecting price and search discrimination on the internet. In Proceed- ings of the 11th ACM Workshop on Hot Topics in Networks, pages 79-84. acm.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A parsimonious language model of social media credibility across disparate events",
"authors": [
{
"first": "Tanushree",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. CSCW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanushree Mitra, Graham Wright, and Eric Gilbert. 2017. A parsimonious language model of social media credibility across disparate events. In Proc. CSCW.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling factuality judgments in social media text",
"authors": [
{
"first": "Sandeep",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Tanushree",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (2)",
"volume": "",
"issue": "",
"pages": "415--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandeep Soni, Tanushree Mitra, Eric Gilbert, and Jacob Eisenstein. 2014. Modeling factuality judgments in social media text. In ACL (2), pages 415-420.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Statistical language models for information retrieval",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ChengXiang Zhai. 2008. Statistical language models for information retrieval. Synthesis lectures on hu- man language technologies, 1(1):1-141.",
"links": null
}
},
"ref_entries": {}
}
}