
In recent years, there has been growing attention to human-algorithm interactions in the CHI and CSCW communities as the use of algorithmic decision-making systems grows. Algorithmic curation, including the prioritizing, classifying, and filtering of content, is behind most influential social media sites [3]. Beyond social media, algorithmic decision-making systems are on the rise in a wide variety of domains, with recent work investigating their use in settings including classrooms [10], gig work [11, 13], archival practice [21], online advertising [7], and criminal justice systems [6].


An emerging body of literature has shown that algorithmic systems (often based on machine learning approaches) can fail in multiple ways. One way algorithmic systems fail is by not engaging with users and stakeholders, and thereby losing their acceptance [4, 5, 14]. There is often a disconnect between mathematically rigorous machine learning methods and organizational and institutional realities, constraints, and needs [22]. Furthermore, approaches that rely largely on automated processing of historical data can repeat and amplify historical stereotypes, discrimination, and prejudice [e.g., 1].


This workshop aims to contribute to these conversations by exploring how human-centered and participatory methods can inform the design of algorithmic systems.

This workshop approaches “participation” from multiple vantage points. Participatory approaches, for example, take the form of design projects that aim to incorporate stakeholders directly in the design of algorithms [e.g., 20]. User-centered design approaches, meanwhile, take users’ needs and concerns as an initial design input in the creation of algorithmic systems [e.g., 12]. Such projects echo the longstanding tradition of participatory design (PD) and user-centered approaches within CSCW and raise a unique set of concerns about values, priorities, and influence. For the spirit of PD to flourish, stakeholders must have the capacity for meaningful influence over the direction of a system’s development [15]. But holding true to this spirit can prove challenging when algorithms are often invisible [19], input into their development may not always come from direct community stakeholders [20], and users of the resulting algorithmic system might hold conflicting views and understandings of when and how algorithms operate [18]. Furthermore, the capacity for algorithms to be fair or to enact fairness remains a topic of debate [14, 23], and communities may disagree on whether algorithms and other forms of automation are suitable interventions in particular contexts [2, 17].


A long line of research on “human-in-the-loop” systems and interactive machine learning offers computational techniques for incorporating human inputs into algorithmic systems [e.g., 8]. Exploring applications of this work in the new social contexts of algorithmic systems opens up new research questions. While the human relationship with algorithmic, machine action can be confrontational in some contexts [16], in others it might be leveraged to promote social welfare and community cohesion [9]. The point such tensions make is that algorithmic interactions are not zero-sum: they are neither wholly positive nor negative. Instead, the CSCW community is called to take care and pay close attention to how collaboration unfolds in such interactions, noting the ways in which individuals’ capacity, agency, and influence to participate are shaped both by algorithms and by the broader organizational and institutional settings within which such encounters take place.


Algorithms cannot be understood in isolation from the data that both shape and are shaped by them, nor from the broader settings within which they are deployed and with which they interact. Accordingly, this workshop also takes a broader perspective on algorithmic participation, examining where, how, and to what effect participation occurs across data-driven ecosystems. How do questions of data collection, awareness, and consent, for example, inform the question of algorithmic participation? Such a focus can fuel conceptual discussions about the changing contours of participation in contemporary social computing landscapes. What does it mean to “participate” in such landscapes? Has the nature of “participation” changed, particularly in so-called “Big Data” systems that incorporate millions of users and billions of data points? Such developments challenge us to probe further into the nature of participation, and what it takes not only to participate, but to participate meaningfully.


This one-day workshop sets out to create a “tactical toolkit”, a workshop report aimed at supporting and provoking action through three objectives:

  1. Identifying cases and ongoing projects on the topic of participation in data-driven, algorithmic ecosystems;

  2. Listing key challenges and strategies in this space; and

  3. Setting a forward-facing agenda to provoke further attention to the changing role of participation in contemporary sociotechnical systems.

References


[1]             Angwin, J. et al. 2016. Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica.

[2]             Bullard, J. 2016. Motivating Invisible Contributions: Framing Volunteer Classification Design in a Fanfiction Repository. Proceedings of the 19th International Conference on Supporting Group Work (New York, NY, USA, 2016), 181–193.

[3]             Diakopoulos, N. 2016. Accountability in Algorithmic Decision Making. Commun. ACM. 59, 2 (Jan. 2016), 56–62. DOI:https://doi.org/10.1145/2844110.

[4]             Dietvorst, B.J. et al. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General. 144, 1 (2015), 114–126.

[5]             Dietvorst, B.J. et al. 2016. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science. 64, 3 (Nov. 2016), 1155–1170. DOI:https://doi.org/10.1287/mnsc.2016.2643.

[6]             Dressel, J. and Farid, H. 2018. The accuracy, fairness, and limits of predicting recidivism. Science Advances. 4, 1 (Jan. 2018), eaao5580. DOI:https://doi.org/10.1126/sciadv.aao5580.

[7]             Eslami, M. et al. 2018. Communicating Algorithmic Process in Online Behavioral Advertising. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 432:1–432:13.

[8]             Gillies, M. et al. 2016. Human-Centred Machine Learning. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA, 2016), 3558–3565.

[9]             Halfaker, A. et al. 2014. Snuggle: Designing for Efficient Socialization and Ideological Critique. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2014), 311–320.

[10]           Jahanbakhsh, F. et al. 2017. You Want Me to Work with Who?: Stakeholder Perceptions of Automated Team Formation in Project-based Courses. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2017), 3201–3212.

[11]           Jhaver, S. et al. 2018. Algorithmic Anxiety and Coping Strategies of Airbnb Hosts. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 421:1–421:12.

[12]           Lee, M.K. et al. 2017. A Human-Centered Approach to Algorithmic Services: Considerations for Fair and Motivating Smart Community Service Management That Allocates Donations to Non-Profit Organizations. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2017), 3365–3376.

[13]           Lee, M.K. et al. 2015. Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (New York, NY, USA, 2015), 1603–1612.

[14]           Lee, M.K. and Baykal, S. 2017. Algorithmic Mediation in Group Decisions: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (New York, NY, USA, 2017), 1035–1048.

[15]           Lynch, T. and Gregor, S. 2004. User participation in decision support systems development: Influencing system outcomes. European Journal of Information Systems. 13, 4 (2004), 286–301.

[16]           Oh, C. et al. 2017. Us vs. Them: Understanding Artificial Intelligence Technophobia over the Google DeepMind Challenge Match. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2017), 2523–2534.

[17]           O’Neil, C. 2017. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books.

[18]           Rader, E. et al. 2018. Explanations As Mechanisms for Supporting Algorithmic Transparency. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 103:1–103:13.

[19]           Rader, E. and Gray, R. 2015. Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (New York, NY, USA, 2015), 173–182.

[20]           Sen, S. et al. 2015. Turkers, Scholars, “Arafat” and “Peace”: Cultural Communities and Algorithmic Gold Standards. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (New York, NY, USA, 2015), 826–838.

[21]           Summers, E. and Punzalan, R. 2017. Bots, Seeds and People: Web Archives As Infrastructure. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (New York, NY, USA, 2017), 821–834.

[22]           Veale, M. et al. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 440:1–440:14.

[23]           Woodruff, A. et al. 2018. A Qualitative Exploration of Perceptions of Algorithmic Fairness. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 656:1–656:14.

[24]           Zhu, H. et al. 2018. Value-Sensitive Algorithm Design: Method, Case Study, and Lessons. Proceedings of the 21st ACM Conference on Computer Supported Cooperative Work & Social Computing (New York, NY, USA, 2018).
