

Date December 9, 2022
Location Virtual

While machine learning (ML) models have achieved great success in many applications, concerns have been raised about their potential security, privacy, fairness, transparency, and ethics issues when applied to real-world applications. Irresponsibly applying machine learning to mission-critical and human-centric domains such as healthcare, education, and law can lead to serious misuse, inequity issues, negative economic and environmental impacts, and/or legal and ethical concerns. For example, existing research has well documented that a machine learning model can exhibit discrimination against already-disadvantaged or marginalized social groups, such as BIPOC and LGBTQ+ communities. Moreover, it has been demonstrated that a machine learning model may unintentionally leak sensitive personal information such as medical records. Last but not least, machine learning models are often regarded as "black boxes" and can produce unreliable, unpredictable, and unexplainable outcomes, especially under domain shifts or maliciously crafted attacks.

To address these negative societal impacts of ML, researchers have looked into different principles and constraints to ensure trustworthy and socially responsible machine learning systems. This workshop makes the first attempt towards bridging the gap between security, privacy, fairness, ethics, game theory, and machine learning communities and aims to discuss the principles and experiences of developing trustworthy and socially responsible machine learning systems. The workshop also focuses on how future researchers and practitioners should prepare themselves for reducing the risks of unintended behaviors of sophisticated ML models.

This workshop aims to bring together researchers interested in the emerging and interdisciplinary field of trustworthy and socially responsible machine learning from a broad range of disciplines who bring different perspectives to this problem. We attempt to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations. Topics of this workshop include but are not limited to:

Points of difference: This workshop aims to raise awareness of the societal issues involved in applying machine learning to real-world systems, and stimulate interdisciplinary research that can tackle open challenges on building trustworthy and socially responsible machine learning models. Although some of our topics may overlap with other workshops (e.g., attacks on ML are discussed in workshops on adversarial robustness), these workshops do not have a central theme on the social responsibility of machine learning and do not aim to directly address societal issues of ML.

Diversity Statement

We have taken various steps to expand the diversity of the participants. The organizers and invited speakers have diverse backgrounds (e.g., gender, race, affiliation, seniority, and nationality). In particular, many are from underrepresented groups in STEM fields: the organizers include female scholars, and the confirmed speakers include female scholars as well as researchers from underrepresented races. The organizers include both experienced, senior members and junior researchers, including people who have not organized this workshop series before. Moreover, the workshop encompasses researchers from both industry and academia. Additionally, the workshop is inclusive and covers a wide range of topics (e.g., fairness, transparency, interpretability, privacy, and robustness) that have attracted considerable attention. We also aim to bring together both theoretical and applied researchers from various domains.