The rapid
development of Artificial Intelligence (AI) has the potential to revolutionize
various aspects of our lives, offering opportunities for increased efficiency,
productivity, and even societal progress. However, this rapid growth also
raises concerns about potential social injustices. AI systems, often trained on
biased data, can perpetuate existing inequalities and discriminatory practices.
This disconnect between AI development and principles of social justice
necessitates a proactive approach to ensure equitable and responsible
implementation of this powerful technology. This paper aims to propose
concrete policy recommendations that bridge this gap and align AI development
and deployment with principles of social justice. These recommendations focus
on three key areas: responsible data collection, storage, and usage; the
mitigation of bias in AI systems; and algorithmic auditing supported by
standardized measures of bias. By addressing these crucial aspects, we can
work towards harnessing the benefits of AI while mitigating potential
injustices, ultimately fostering a future where this technology serves as a
tool for positive societal change.
Social injustice caused by AI
AI has the
potential to usher individuals into a new era of technological advancements and
opportunities. However, this rapid development of AI also brings forth
potential social injustices that need to be addressed. These potential social
injustices include biased decision-making, privacy concerns, and increased
inequality. To develop strategic policies that align AI development with social
justice, it is crucial to address these issues and mitigate their impact on
marginalized communities. This can be achieved by implementing regulations and
guidelines that promote transparency, accountability, and fairness in AI
systems. By ensuring that AI algorithms are regularly audited and monitored for
bias and discriminatory outcomes, we can minimize the potential harm they may
inflict on vulnerable populations. Additionally, it is important to prioritize
the inclusion and representation of diverse voices in the development and
implementation of AI technologies to prevent the perpetuation of existing
social inequalities. This can be achieved by fostering collaborations between
AI developers, policymakers, and community organizations to ensure that the
concerns and needs of marginalized communities are considered. By actively
involving these stakeholders in the decision-making process, we can work
towards creating AI systems that are ethically sound, socially beneficial, and
aligned with principles of justice and equity (Bardhan and Engstrom, 2021).
Additionally, AI is likely to increase efficiency and productivity, as machines
can make decisions faster and without human intervention. For instance, widely
deployed self-driving cars could reduce road accidents to a minimum by removing
human error. Overall, AI and advanced robots are transforming the
job landscape in developed countries as they shift from manufacturing to
service and knowledge-based industries. These changes will require new
strategies and may necessitate significant social, political, and economic
transformations as the very fabric of our society undergoes a fundamental
shift. To delve deeper, it is crucial to define and understand social justice
and its core principles.
Social
justice is challenging to define due to its broad usage in political and moral
discourses. In political philosophy, it is intertwined with culture, social
change, freedom, solidarity, and human rights. Social justice is linked with
equality and equity, although they are distinct concepts. Professor Martha
Albertson Fineman suggests that social justice goes beyond legal aspects and
requires a welfare state. Fairness is often associated with natural and legal
rights when discussing social justice, with citizens expecting fair resource
distribution from the government. Dr. Brian W. Meeks emphasizes the importance
of equity and fairness in creating a just society (Bernhardt, 2023).
AI privacy violations
The recent
expansion of AI as the latest technological phenomenon that can help us
automate tasks offers an opportunity to reassess the challenge to privacy that
mass data collection and profiling creates. Big data, or broad and varied data
sets that reveal complex patterns and trends, is what AI technology uses as its
raw material in order to learn, adapt, and develop. This requirement for vast
amounts of data has led to a corresponding rise in mass data collection and the
advancement of increasingly intricate and refined profiling (Li et al., 2023).
This raises significant concerns about privacy violations and possible
discrimination. To protect people from privacy infringements and the
discriminatory effects of mass data collection and profiling, strategies for
enforcing strict regulations and safeguards must be put in place. The primary
concern with this type of profiling is that it intrudes on the privacy rights
of the individuals whose information is being processed. The UK's Data
Protection Act 1998 (DPA), since succeeded by the Data Protection Act 2018 and
the UK GDPR, established a system of regulation intended to strike a balance
between the usefulness of technology and the protection of individual privacy
rights (Vanberg, 2021).
However,
there are a few ways in which this regulatory regime might be seen as
insufficient to deal with the operation of AI for the purpose of mass
data collection and profiling. For example, when every data subject's consent
is systematically avoided by the data controller, our rights as individuals can
be undermined. Also, the requirement that consent must be expressed as 'informed'
is very difficult to assess from the perspective of those responsible for
upholding and enforcing our privacy rights. These challenges call for a
reassessment of the balance between our right to privacy and the benefits of
using technology within the current legal framework. This takes on an even
greater urgency when we consider the potential for AI technologies to make more
advanced and intrusive forms of mass data collection and profiling possible in
the near future. If left unchecked, it could become increasingly difficult to
argue that the balance provided for in the DPA between privacy and the demand for
technology is being maintained.
One of the
most difficult issues around AI technology is the extremely complex nature of
modern algorithms and the lack of transparency of these algorithms to the
public. Algorithms are step-by-step procedures for calculations, data
processing, and automated reasoning. When most people think of algorithms, they
might think of a recipe or a set of instructions for carrying out mathematical
operations, and indeed, these definitions are good, non-technical descriptions.
The problem is that modern algorithms—while also following a set of
instructions—incorporate the use of massive amounts of data and have become so
uniquely complex that now algorithmic transparency and understanding can often
involve the development and utilization of new fields of knowledge and new
computational methodologies (Tsamados et al., 2021). Now, well-educated
computer programmers and those working within the field of algorithmic
accountability are often the only people equipped to understand the real nature
of the algorithms that are shaping increasingly large aspects of our lives.
Such a high degree of what could be referred to as "algorithmic
elitism" is undemocratic and reinforces the divide between the powerful
and the disempowered in society. By keeping ordinary people and crucial
oversight bodies in the dark about not only the function and potential
consequences of algorithm-moderated decisions but also about the very nature of
an algorithm itself, companies and governments are able to avoid accountability
by creating and perpetuating a knowledge gap.
With the
rise of "big data" and machine-learning technologies, any laws or
policies that require explanations for algorithm-moderated decisions are likely
to be difficult to meaningfully implement; machine-learning algorithms derive
their outputs from training data rather than from explicitly programmed rules,
and that data may be neither factual nor socially relevant to the system's
declared function. This
is problematic for established accountability practices, such as public reasons
and the ability to question and review the decision-making of those in power,
because it becomes almost impossible to explain and justify algorithm-moderated
decisions when their true nature is guarded by the few with the expertise to
understand them.
Proposed regulations for responsible data collection, storage, and usage
Proposed regulations should mandate that data collectors state the purposes of
their data collection. Transparent collection processes, combined with the UK's
personal data protection legislation, would provide the direction and
requirements needed for responsible data collection (Jo and Gebru, 2020). The
legislation requires that unclear, illegitimate, and unfair data collection
practices be avoided, and anyone contravening it is liable to legal action.
Data storage and usage regulations are essential because they serve the
interests of data subjects, data users, and data security. The proposed
regulations therefore mandate that data controllers ensure the following:
first, data is stored and protected in its designated storage and is not
accessed or used beyond the express authority given for its usage; second,
user accounts and access are monitored, with any breach of the usage
regulations leading to legal action; third, server locations and data-movement
procedures guarantee security and control; and fourth, each data creator or
person responsible for the data can easily access the data, check its accuracy
and usability, and correct it as necessary.
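As one illustration of these requirements, the minimal sketch below (all class
and method names are hypothetical, not drawn from any existing framework)
models a data store that refuses and logs access outside the express authority
given, and that lets the data subject inspect and correct their own record:

```python
from datetime import datetime, timezone

class AuditedDataStore:
    """Sketch of the proposed storage rules: siloed records, express
    authorization for access, access logging, and rectification rights."""

    def __init__(self):
        self._records = {}     # subject_id -> record dict
        self._authorized = {}  # subject_id -> set of authorized users
        self.access_log = []   # (timestamp, user, subject_id, action)

    def store(self, subject_id, record, authorized_users):
        self._records[subject_id] = dict(record)
        self._authorized[subject_id] = set(authorized_users)

    def read(self, user, subject_id):
        # Access outside the express authority given is refused and logged.
        allowed = user in self._authorized.get(subject_id, set())
        self.access_log.append((datetime.now(timezone.utc), user, subject_id,
                                "read" if allowed else "denied"))
        if not allowed:
            raise PermissionError(f"{user} is not authorized for {subject_id}")
        return dict(self._records[subject_id])

    def rectify(self, subject_id, field, value):
        # The data subject can check the record's accuracy and alter it.
        self._records[subject_id][field] = value
```

In practice the access log itself would feed the monitoring that the proposed
regulations require, giving oversight bodies a record of every denied request.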
Mitigating bias in AI systems
To
effectively mitigate bias and prevent AI systems from deepening social
inequalities, diverse representation in data collection and analysis is
crucial. Data diversity ensures that AI models reflect the complexity of our
world, reducing their likelihood of perpetuating harmful stereotypes and
discriminatory outcomes. This strategy directly counters the argument that
increased scrutiny of AI systems for bias hinders innovation. Instead, as the
Regulatory Impact Assessment of the proposed EU Artificial Intelligence Act
2021 recognizes, incorporating a "risk-based approach" with
"public safety and security requirements" can actually boost
responsible AI development. Therefore, a critical lens focused on bias does not
stifle innovation; rather, it redirects innovation towards more ethical and
impactful outcomes, benefiting all of society.
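A first practical step toward the diverse representation described above is
simply measuring how groups are represented in a training set. The sketch
below (the function name and the minimum-share threshold are illustrative
assumptions, not a published standard) flags under-represented groups:

```python
from collections import Counter

def representation_report(records, group_key, min_share=0.1):
    """Summarize how each demographic group is represented in a dataset
    and flag any group whose share falls below an illustrative threshold."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"count": n,
                         "share": round(share, 3),
                         "under_represented": share < min_share}
    return report
```

Such a report does not by itself fix a biased dataset, but it makes the gap
visible early enough to guide targeted collection of additional data.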
Data anonymization and privacy-preserving technologies
One of the most promising strategies for advancing the social good in the
context of AI is to regulate the protection of data, given that AI
applications feed on large amounts of data, some of which will contain
sensitive identifying information. The 2019-2020 report of the California
Attorney General recommended the use of anonymized and de-identified
information to alleviate the risks associated with the handling of personal
information. Both anonymization and de-identification aim to protect
individual privacy by preventing the identification of distinct people within
a given dataset. The main distinction between the two is the presence of a
trusted third party, who converts identifiers into coded values provided only
to the researchers or the public authority in charge of the data; this adds a
layer of confidentiality to the data processing cycle and is known as
pseudonymization. De-identification, by contrast, renders personal data
resistant to 're-identification' by removing or altering identifiers in a
dataset, such as names, addresses, and telephone numbers, or by applying
technical measures to 'segment the data or restrict the recipient's access'.
By following detailed guidelines from new statutory provisions and reading the
criteria constructively, businesses and public authorities alike would have an
incentive to store and process data in a way that allows artificial
intelligence to benefit society.
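The pseudonymization arrangement described above can be sketched in a few
lines: a trusted third party replaces direct identifiers with keyed codes and
keeps the re-identification key separate from the dataset handed to
researchers. The class and field names here are hypothetical, and a real
deployment would need key management far beyond this sketch:

```python
import hmac
import hashlib

class TrustedThirdParty:
    """Sketch of pseudonymization: identifiers become stable keyed codes;
    only the holder of the secret key can link codes back to people."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key  # held only by the trusted third party

    def pseudonym(self, identifier: str) -> str:
        # Keyed hash: the same identifier always yields the same code,
        # but the code cannot be reversed without the key.
        return hmac.new(self._key, identifier.encode(),
                        hashlib.sha256).hexdigest()[:16]

    def pseudonymize(self, records, id_field, drop_fields=()):
        out = []
        for rec in records:
            clean = {k: v for k, v in rec.items()
                     if k != id_field and k not in drop_fields}
            clean["code"] = self.pseudonym(str(rec[id_field]))
            out.append(clean)
        return out
```

Because the code is stable, researchers can still link records belonging to
the same person across datasets without ever seeing the underlying identity.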
Algorithmic auditing
Algorithmic auditing, as a subset of algorithmic accountability, examines
algorithmic models, their development, and their applications, assessing both
holistic and specific qualities of algorithms against ethical and moral
standards and evaluating their potential and actual impacts at the individual,
social, and system levels. Although auditing as an activity has already been
heavily discussed in the social informatics literature, the actual practice of
algorithmic auditing in AI development is still nascent. Moreover, the
emerging practice of assessing algorithmic fairness and equity adds another
dimension to the concept, one that will require not just different but more
comprehensive means of performing such audits. It is therefore important to
establish standardized guidelines and frameworks for algorithmic auditing. By
emphasizing the lived and interconnected contexts in which data is produced,
and by recognizing the relationship between how data is understood and the
societies it describes, critical data studies can offer direction for
deliberately combining algorithmic examination with participatory approaches.
This, in turn, opens avenues for performing audits that are closely attuned to
their social context.
Given the
wide range of potential uses for algorithmic impact assessments—health care,
unemployment, and evaluation in the criminal justice system—a broad examination
will require input from a variety of domain experts. Nevertheless, the UK
experience with data protection impact assessments has shown that, as long as
the scope of a given assessment is well defined, these assessments can serve as
a useful and practical step in implementing these kinds of recommendations. The
law could, for example, provide a set of criteria for deciding what the
appropriate level of algorithmic impact assessment ought to be for a given
application. Such decision criteria might depend on certain attributes of the
algorithm in question. For example, a greater emphasis on public participation
in the assessment process could be mandated in cases where the algorithm in
question is used in a decision-making process that affects members of the
public or where any information used as an input to the algorithm is obtained
from the public. But any effort to operationalize these high-level standards
will still require careful consideration of contextual and technical
differences across the potential uses of algorithmic impact assessments. This
conditional approach of tying the level of impact assessment required to the
nature of the algorithm and the scope of its potential uses could help
alleviate concerns about the practicality of ex-ante assessments. This approach
would also avoid the inflexible burden of a blanket requirement for algorithmic
impact assessments and instead allow for the level of assessment to be tailored
to the specific needs of the affected public and the nature of the relevant
algorithm.
Standardized metrics and procedures for identifying and mitigating algorithmic bias
The NAACP
Data Responsibility Workgroup proposes the implementation of standardized
metrics and procedures to detect algorithmic bias, in addition to requesting
algorithmic impact assessments. They suggest the adoption of bias
"standards" that algorithmic systems must follow to be deemed
acceptable for important decision-making. Another idea is to utilize
"anti-bias training data" to minimize the negative effects of
discriminatory algorithmic decisions. This could involve publicly disclosing
instances of proven algorithmic discrimination or analyzing training data
extensively for known biases. The workgroup asserts that standardizing the
process for identifying algorithmic bias will enable all parties involved to
address problems effectively. These proposed solutions align with the evolving field of
"Fairness, Accountability, and Transparency in Machine Learning,"
which aims to address algorithmic bias. A different approach is proposed by the
European Union, which requires companies to disclose the training data used by
machine learning algorithms under the proposed Artificial Intelligence Act,
2021 (Khan and Mer, 2023). This Act will establish regulations concerning the
use and development of AI that can significantly impact individuals' rights or
obligations. Specifically, organizations will need to provide a meaningful
subset of training data that evaluates a critical aspect of a person's
well-being and is used for predictions by high-risk systems. This approach
emphasizes understanding and addressing potential algorithmic discrimination
patterns within decision-making processes while not excluding the use of
standardized testing and measures of bias as proposed by the NAACP. The EU
approach reflects an ongoing debate regarding the balance between promoting
social trust through transparency measures and concerns about existing biases
persisting in systems without sufficient oversight.
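To make the idea of a standardized bias metric concrete, the sketch below
computes demographic parity difference, the largest gap in favorable-outcome
rates between groups. This is a common fairness measure, but the function
name and any acceptability threshold applied to it are illustrative
assumptions, not the NAACP's or the EU's actual specification:

```python
def demographic_parity_difference(decisions, groups):
    """Return the largest gap in favorable-outcome rates between groups,
    plus the per-group rates. `decisions` is a list of 0/1 outcomes and
    `groups` a parallel list of group labels; a 'bias standard' could cap
    the permissible gap for high-stakes decision-making."""
    totals = {}
    for d, g in zip(decisions, groups):
        n, s = totals.get(g, (0, 0))
        totals[g] = (n + 1, s + d)
    per_group = {g: s / n for g, (n, s) in totals.items()}
    return max(per_group.values()) - min(per_group.values()), per_group
```

An audit procedure could compute this gap on held-out decisions at regular
intervals, disclosing any group whose favorable-outcome rate falls outside the
agreed standard.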
Conclusion
Our conclusion emphasizes that policy action to ensure fairness in the
development and implementation of AI requires proactive measures so that AI is
created, advanced, and utilized in a manner that benefits society and improves
the future for all of its members, particularly those susceptible to social
exclusion or digital discrimination. In this regard, AI
should be understood as a tool to enable more inclusive, just, and equitable
societies where human rights and social justice are strongly advocated and
upheld. Nevertheless, these visions will not be realized through technological
solutions or hands-off approaches. It is crucial to establish and sustain a
dynamic network of stakeholders from the government, industries, civil society
groups, and the public, where social, ethical, and legal concerns related to AI
development and deployment are openly and inclusively discussed. Furthermore,
regulators and policymakers should facilitate the transition from principles to
tangible outcomes by effectively regulating and governing the various stages of
AI development. It is imperative for the community to take decisive action now
and move in the right direction to ensure that all members of society reap the
long-term benefits of the rapid advancement of AI and digital revolutions. By
critically examining the current predominant non-interventionist,
technology-driven approaches in AI policy and proposing future roles for
governments in facilitating a human-centered and well-governed approach to AI
development and deployment, the suggested recommendations have significant
implications for contemporary digital societies and have the potential to bring
about transformative changes that embody fairness, equality, and social
justice.
References
Bardhan, Nilanjana R., and Craig L. Engstrom. "Diversity, Inclusion, and Leadership Communication in Public Relations: A Rhetorical Analysis of Diverse Voices." Public Relations Journal 12, no. 4 (2021). https://prjournal.instituteforpr.org/wp-content/uploads/Bardhan_PRJ14.2-1.pdf.
Bernhardt, Mark. "American Cold War Hospitality: Portraying Societal Acceptance and Class Mobility of Mexican, Cuban, and Chinese Immigrants in 1950s Sitcoms." Journal of Cinema and Media Studies 62, no. 4 (2023). https://muse.jhu.edu/pub/349/article/904625/summary.
Jo, Eun Seo, and Timnit Gebru. "Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020. acm.org.
Khan, Farha, and Akansha Mer. "Embracing Artificial Intelligence Technology: Legal Implications with Special Reference to European Union Initiatives of Data Protection." Digital Transformation, Strategic Resilience, Cyber Security and Risk Management (Leeds: Emerald Publishing Limited, 2023), 119-141. https://www.emerald.com/insight/content/doi/10.1108/S1569-37592023000111C007/full/html.
Li, Joey, Munur Sacit Herdem, Jatin Nathwani, and John Z. Wen. "Methods and applications for Artificial Intelligence, Big Data, Internet of Things, and Blockchain in smart energy management." Energy and AI 11 (2023). https://www.sciencedirect.com/science/article/pii/S2666546822000544.
Roche, Cathy, P.J. Wall, and Dave Lewis. "Ethics and diversity in artificial intelligence policies, strategies and initiatives." AI and Ethics 3 (2023). https://link.springer.com/article/10.1007/s43681-022-00218-9.
Tsamados, Andreas, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo, and Luciano Floridi. "The ethics of algorithms: key problems and solutions." AI & Society 37 (2021). https://link.springer.com/content/pdf/10.1007/s00146-021-01154-8.pdf.
Vanberg, Aysem Diker. "Informational privacy post GDPR – End of the Road or the Start of a Long Journey?" The International Journal of Human Rights 25, no. 1 (2021). gold.ac.uk.
©2024 Trends Research & Advisory, All Rights Reserved.