Advances in Consumer Research
Volume 2, Issue 5: 2165-2170
Research Article
Digital Rights and Social Justice: Challenging Untouchability in the Digital Sphere
¹Research Scholar, School of Law, IFTM University, Moradabad, India
²Associate Professor, School of Law, IFTM University, Moradabad, India
Received: Sept. 30, 2025 | Revised: Oct. 17, 2025 | Accepted: Nov. 18, 2025 | Published: Nov. 25, 2025
Abstract

This study examines the intersection of digital rights and social justice by addressing the persistence of untouchability practices within the digital sphere. It explores how marginalized communities face exclusion, discrimination, and unequal access to digital technologies and platforms, thereby perpetuating inequities. It also analyzes legal frameworks, digital policies, and grassroots initiatives aimed at challenging digital untouchability and promoting inclusive digital participation. By highlighting the role of digital rights as a tool for social empowerment, the paper advocates for policy reforms and technological interventions that ensure equitable access and protect the dignity of all users in the digital environment.

INTRODUCTION

The rapid expansion of digital technologies across global societies has created unprecedented opportunities for connectivity, commerce, and communication. However, this technological proliferation has simultaneously reinforced and amplified existing structural inequalities, creating what scholars have termed a "digital divide" that mirrors and magnifies historical patterns of social exclusion and discrimination (Sanders & Scanlon, 2021). The digital sphere, far from being a neutral space of technological advancement, has become a terrain where systemic injustices are reproduced, perpetuated, and often rendered invisible through the veneer of objectivity. This study examines the intersection of digital rights and social justice, arguing that achieving meaningful digital inclusion requires moving beyond technical solutions to address the fundamental human rights dimensions embedded within digital systems.

 

The right to digital access and participation has been internationally recognized as a fundamental human right. In 2016, the United Nations declared access to the Internet a basic human right, establishing a normative framework that positions digital connectivity within the broader architecture of human dignity and social inclusion (Sanders & Scanlon, 2021). However, despite this recognition, millions of people, disproportionately from marginalized communities, remain excluded from meaningful digital participation. In the United States alone, millions still lack home access to high-speed Internet, with particularly stark disparities among low-income populations, people of color, older adults, Native Americans, and rural residents (Sanders & Scanlon, 2021). This structural reality perpetuates social, economic, and political disparities that extend far beyond a mere lack of technological access.

THE DIGITAL DIVIDE AS SOCIAL INJUSTICE

2.1 Understanding Digital Inequality

The concept of the digital divide has evolved significantly over the past two decades. Initially framed as a simple binary between those with and without Internet access, contemporary scholarship recognizes the digital divide as a multidimensional phenomenon encompassing infrastructure gaps, digital literacy disparities, affordability barriers, and a shortage of content relevant to equity-deserving groups (Raihan et al., 2024). Research reveals that vulnerable populations, including low-income people, older adults, racial and ethnic minorities, newcomers and immigrants, Indigenous groups, people with disabilities, and women, face interconnected barriers that compound their digital exclusion (Raihan et al., 2024).

 

The COVID-19 pandemic exposed and exacerbated these digital inequalities in dramatic fashion. When schools, businesses, and government services transitioned to remote digital operations, people without reliable Internet access were systematically excluded from essential services. The educational interruption caused by the pandemic revealed how social injustice, inequity, and the digital divide are inseparably linked, with uniquely targeted measures proving necessary to address fundamental disparities (Bozkurt et al., 2020). These crises demonstrated that digital access is not merely a matter of consumer convenience but a fundamental infrastructure requirement for economic participation, education, and civic engagement.

2.2 Data Justice and Visibility

Beyond questions of access and infrastructure, digital injustice manifests through the production of data and the operation of algorithmic systems that govern how people are made visible and treated. Data justice, understood as fairness in the way people are made visible, represented, and treated as a result of their production of digital data, has emerged as a critical framework for analyzing digital inequality (Taylor, 2017). The three pillars of data justice—(in)visibility, (dis)engagement with technology, and antidiscrimination—integrate positive and negative rights and freedoms, challenging both current data protection regulations and the growing assumption that being visible through the data we emit constitutes part of the contemporary social contract (Taylor, 2017).

 

However, the power of data to sort, categorize, and intervene largely operates outside the frameworks of social justice activism. Data-driven discrimination advances at a similar pace to data processing technologies; however, awareness and mechanisms for combating such discrimination remain inadequate (Taylor, 2017). This gap between technological capacity and justice accountability creates persistent challenges in digital systems, where the harms of algorithmic decision-making are difficult to trace, understand, or remedy.

 

ALGORITHMIC DISCRIMINATION AND STRUCTURAL INEQUALITY

3.1 The Problem of Algorithmic Bias

Artificial intelligence systems and algorithmic decision-making have become embedded in critical sectors, including criminal justice, employment, healthcare, education, credit scoring, and benefits administration. While often presented as objective alternatives to human decision-making, algorithmic systems frequently perpetuate and amplify the biases and discrimination they purport to eliminate (Nuredin, 2024). These systems operate based on datasets that inherently reflect societal prejudices and historical inequalities, generating discriminatory outcomes that disproportionately harm marginalized groups (Nuredin, 2024).

 

The mechanisms through which algorithmic bias manifests are complex and often opaque. Research has identified multiple types of algorithmic discrimination operating through various technical mechanisms and biased training processes (Ntoutsi et al., 2020). Unlike the explicit discrimination prohibited by civil rights laws, algorithmic discrimination frequently operates through technical mechanisms that obscure discriminatory intent while producing discriminatory outcomes. Algorithmic fairness approaches, while important, often prove insufficient because they operate at the level of individual algorithms rather than addressing the structural conditions that generate unequal outcomes (Hoffmann, 2019).
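To make this limitation concrete, the following minimal sketch shows the kind of individual-algorithm fairness audit the literature describes: checking one system's outputs for disparate impact across groups. It is an illustrative toy rather than a method drawn from the sources cited here; the records, group labels, and the 0.8 threshold (the familiar "four-fifths" screening rule from U.S. employment practice) are all supplied for the example.

```python
# Toy group-fairness audit: measures disparate impact in one algorithm's
# outputs. All records and group labels below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an automated screening system.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40
             + [("group_b", True)] * 30 + [("group_b", False)] * 70)

rates = selection_rates(decisions)
print(rates)                    # {'group_a': 0.6, 'group_b': 0.3}
print(disparate_impact(rates))  # 0.5 -> fails the four-fifths screen
```

Note what such an audit cannot see: why the underlying data differ by group in the first place, which is precisely the structural blind spot Hoffmann (2019) identifies.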

 

3.2 Transparency and Accountability Deficits

The complexity of machine learning systems creates particular challenges for accountability and remedies. Decision subjects—those affected by algorithmic decisions—frequently cannot understand why they received particular outcomes, let alone challenge or contest those decisions (Hoffmann, 2019). Content moderation technologies developed to manage online speech often fail to understand context and are applied without sufficient human rights standards, thereby failing to protect freedom of expression, access to information, and diversity in digital environments (Oliva, 2020).

 

The absence of meaningful transparency mechanisms perpetuates what scholars term legal estrangement and digital alienation. When algorithmic systems produce outcomes that systematically disadvantage particular groups but remain inscrutable to both operators and affected persons, trust in digital institutions erodes further (Soss & Weaver, 2017). This transparency deficit operates across multiple registers—technical, legal, and political—making accountability increasingly difficult to conceptualize, let alone achieve.

 

DIGITAL COLONIZATION AND GLOBAL INEQUITIES

4.1 The Persistence of Extractive Patterns

 

The development and deployment of digital technologies reflect the historical patterns of colonial extraction and knowledge appropriation. Western corporations develop technological systems imbued with Western values and perspectives and then export these systems with minimal regulation or critical scrutiny into contexts with radically different needs, values, and institutional capacities (Birhane, 2020). This algorithmic colonization reproduces the fundamental logic of traditional colonialism through technological means: concentration of control, extraction of value from peripheral regions, and subordination of local knowledge and agency (Birhane, 2020).

 

In the Global South, technology imported from the West often proves unfit for local problems while simultaneously impoverishing local technological development and creating dependency on foreign software infrastructure (Birhane, 2020). This technological subordination intersects with and reinforces economic and political inequalities, creating structural injustices that ensure digital systems benefit those already advantaged while excluding or disadvantaging those already marginalized (Mulder, 2020).

 

4.2 Migration, Displacement, and Data Harms

Certain populations face heightened vulnerability within digital systems. Refugees, internally displaced persons, and migrants face distinctive data protection challenges as humanitarian organizations increasingly deploy data-driven approaches that can inadvertently intensify the problems they aim to solve (Hayes, 2017). Mass surveillance, both state and corporate, disproportionately impacts already vulnerable populations, with migrants facing particularly acute risks of surveillance-enabled migration control and coercion (Hayes, 2017).

LABOR, WORK, AND DIGITAL INCLUSION

5.1 Platform Labor and Precarity

Digital platforms have created new forms of employment while simultaneously generating new forms of precariousness and inequality. Platform-based remote work creates opportunities for regions with traditional economic constraints; however, these opportunities often reproduce and intensify existing hierarchies of freedom, flexibility, precarity, and vulnerability (Anwar & Graham, 2020). Workers in the Global South participating in digital gig economies frequently lack the labor protections, bargaining power, and social safety nets available to workers in wealthy nations, creating new forms of digital labor exploitation (Anwar & Graham, 2020).

 

The digitalization of work has also enabled new mechanisms of surveillance and control, with algorithmic management systems directing work processes while obscuring decision-making and limiting workers' collective organizing capacity. This represents the systematic intensification of labor extraction while appearing to reduce human bias and discrimination through technical objectivity.

 

5.2 Technology for Empowerment or Control

Technology advocates often emphasize women's empowerment through digital literacy and technology adoption, particularly for small- and medium-sized enterprises. However, without attention to structural inequalities and power asymmetries, technology can become another mechanism through which existing inequalities are reproduced (Akpuokwe et al., 2024). Meaningful technology-enabled empowerment requires integration with financial literacy, community support, access to capital, and policy frameworks that create enabling environments, rather than merely expecting technology itself to bridge gaps.

 

CONTENT MODERATION, HATE SPEECH, AND RIGHTS PROTECTION

6.1 Automated Content Governance

Digital platforms mediate increasingly significant portions of public discourse; however, their content moderation systems often fail to protect fundamental rights, including freedom of expression, access to information, and protection from harassment. Automated content moderation technologies, while rapidly improving, frequently misunderstand context and cultural specificity, resulting in both the under-moderation of harmful content and the over-removal of protected speech (Oliva, 2020). These failures disproportionately impact marginalized communities, whose speech is more likely to be mischaracterized as harmful, while hate speech targeting them remains inadequately addressed.

 

Moreover, content moderation decisions lack meaningful transparency or appeal mechanisms through which affected individuals can challenge decisions that silence their voices or enable their harassment. This represents a fundamental accountability deficit: digital platforms exercise quasi-governmental power over modern public discourse without corresponding governance structures or accountability mechanisms.

 

EDUCATIONAL JUSTICE AND DIGITAL EQUITY

7.1 The COVID-19 Crisis and Educational Exclusion

The pandemic-induced shift to emergency remote learning starkly revealed the digital divide in education. While some students transitioned seamlessly to online learning, others—disproportionately from low-income households, rural areas, and communities of color—were entirely excluded due to a lack of device access, reliable internet connectivity, or private study space (Bozkurt et al., 2020). These educational exclusions have long-term consequences for individual opportunities and collective social mobility.

 

Beyond access inequalities, digital educational tools often embed biases in their design, content, and algorithmic systems. The increasing deployment of generative AI in educational contexts raises distinct ethical risks regarding student privacy, learner autonomy, and the perpetuation of discriminatory patterns (Chan, 2023). These technologies require robust governance frameworks centered on equity and rights protection, rather than being deployed as supposedly neutral technical tools.

 

HEALTH, DATA, AND HEALTHCARE EQUITY

8.1 Health Data and Algorithmic Medicine

Healthcare algorithms increasingly shape diagnosis, treatment, prognosis, and resource allocation decisions. These systems are trained on data that reflect historical patterns of healthcare inequality and discrimination. Algorithmic bias in healthcare perpetuates existing disparities through mechanisms whereby algorithms optimize for historically advantaged populations, thereby systematically underserving marginalized groups (Yao et al., 2022). When algorithmic systems produce outcomes that systematically disadvantage particular groups, a meaningful remedy requires an understanding of both the technical dimensions and the underlying structural conditions.
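A deliberately simplified simulation can illustrate this mechanism. Assume two groups with identical distributions of clinical need, but one group's recorded healthcare spending is suppressed by historical access barriers; an allocator that optimizes on the spending proxy then systematically underserves that group. The group labels, population sizes, and the 0.5 access factor below are all hypothetical, and the code is a sketch of the general pattern rather than of any deployed system.

```python
# Toy simulation of proxy-label bias: allocating care by predicted cost
# when historical cost understates one group's true need.
import random
random.seed(0)

patients = []
for i in range(1000):
    group = "a" if i < 500 else "b"
    need = random.uniform(0, 10)           # true clinical need, identical across groups
    access = 1.0 if group == "a" else 0.5  # group b historically received less care
    cost = need * access                   # recorded spending understates b's need
    patients.append({"group": group, "need": need, "cost": cost})

# "Algorithm": enroll the top 20% by the cost proxy in a care program.
top = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]

share_b = sum(p["group"] == "b" for p in top) / len(top)
print(f"group b share of program slots: {share_b:.0%}")  # near 0%, despite equal need
```

This mirrors, in toy form, the widely reported pattern in which cost-based risk scores understate the needs of patients who historically received less care.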

 

Assumptions embedded in medical algorithms can perpetuate unexamined beliefs about patient populations while reproducing and intensifying discrimination in medical practice. Such patterns demonstrate how data justice failures in healthcare can create seemingly technical solutions that perpetuate injustice while obscuring their discriminatory operation behind scientific authority (McCradden et al., 2020).

 

REFORMING DIGITAL SYSTEMS: TOWARD JUSTICE-CENTERED APPROACHES

9.1 Legal and Policy Frameworks

Multiple jurisdictions have begun developing data protection and digital rights regulations to govern algorithmic systems. Various regulatory frameworks have emerged to enhance accountability and transparency in decision-making systems (Brkan, 2019). However, the implementation and effectiveness of these frameworks remain contested, with particular concerns about whether rights-focused approaches adequately address the structural and collective dimensions of digital injustice.

 

Legal and regulatory approaches have several limitations. First, they tend to focus on transparency and disclosure requirements that may remain inadequate in the absence of meaningful enforcement mechanisms. Second, they frequently center on individual rights and agency while inadequately addressing the structural conditions that generate inequality. Third, they often emerge from wealthy nations and are imposed on Global South contexts without adequate consideration of local conditions, capacities, or alternative approaches.

 

9.2 Community-Based and Participatory Approaches

Emerging scholarship and practice emphasize community-centered approaches to algorithmic equity and the pursuit of data justice. Rather than relying exclusively on technical fixes or regulatory mandates, these approaches center affected communities' knowledge, priorities, and agency throughout technological development (Abebe et al., 2020). Such participatory approaches are more effective at identifying genuine harms, culturally appropriate solutions, and mechanisms for accountability that are responsive to community needs.

 

INTERSECTIONALITY AND STRUCTURAL APPROACHES

10.1 Intersectional Dimensions of Digital Inequality

Understanding digital inequality requires an intersectional analysis that recognizes how multiple marginalized identities interact to structure digital access, treatment, and opportunity. Digital divides do not affect people with single, homogeneous identities but rather complex subjects who navigate intersecting systems of inequality. Age, gender, race, disability status, socioeconomic position, immigration status, and other dimensions of social location interact to create distinctive patterns of digital inclusion and exclusion (Raihan et al., 2024).

 

Research on digital equity in healthcare, employment, education, and social services increasingly reveals that effective interventions require attention to these intersecting dimensions rather than treating marginalization as additive or separable. Responses that ignore intersecting vulnerabilities are inadequate to achieve genuine justice (Watson et al., 2020).

 

10.2 Centering Structural Racism and Systemic Injustice

Digital systems operate within and reproduce the broader structures of systemic racism, colonialism, patriarchy, and inequality. Technical solutions that ignore these structural foundations are inadequate for achieving genuine justice. Instead, efforts to advance digital justice must explicitly engage with how digital systems become enrolled in reproducing structural inequalities while simultaneously considering how digital infrastructure might be reconstructed to advance rather than undermine justice.

 

THE ROLE OF SOCIAL WORK AND HUMAN RIGHTS ADVOCACY

11.1 Professional Advocacy and Systems Change

Social work's human rights approach and commitment to social justice position the profession as an important advocate for digital rights and justice. Consistent with established human rights frameworks, social workers can engage in advocacy efforts to advance policies and programs to alleviate digital divides, particularly for populations most impacted by digital exclusion and discrimination (Sanders & Scanlon, 2021). Advocacy operates at the individual, family, community, organizational, and policy levels.

 

Promising practices emerging from social work and related advocacy fields demonstrate potential pathways. Some models combine technical interventions addressing infrastructure gaps with community education, advocacy, and policy work to address structural barriers (Sanders & Scanlon, 2021). Others center relationship building, trust cultivation, and authentic partnerships between technology developers, policymakers, practitioners, and affected communities.

 

11.2 Decolonial and Justice-Centered Alternatives

Scholarship informed by critical theory offers important alternative frameworks for understanding digital justice beyond liberal rights-based approaches. Such approaches insist on centering diverse intellectual traditions and non-Western ways of knowing, rather than treating Western technical expertise and perspectives as universally appropriate (Mohamed et al., 2020). These alternatives suggest that genuine digital justice requires not merely including marginalized perspectives within existing technological systems but fundamentally reconsidering whose knowledge counts, what problems deserve technological solutions, and who benefits from particular technological developments.

CONCLUSION

The digital sphere, initially imagined as a borderless realm of opportunity transcending historical hierarchies, has become a terrain where structural inequalities are reproduced with remarkable fidelity. Addressing digital injustice requires moving far beyond technical solutions—regulatory frameworks for algorithm transparency, diversity initiatives, or ethics guidelines—important as they are. True digital justice demands a fundamental reconceptualization of digital systems to advance rather than undermine social equality.

 

This reconception requires several simultaneous shifts in thinking. First, centering human rights, particularly for those most marginalized by current digital systems. Second, attending to the structural dimensions of digital inequality rather than treating digital divides as individual failures of access or adoption. Third, engaging with diverse perspectives and non-Western approaches to technology and society. Fourth, establishing meaningful accountability mechanisms through which affected communities can challenge discriminatory digital systems and demand remedies. Fifth, fundamentally democratizing digital governance so that decisions about technological systems reflect diverse interests and values rather than concentrating power among wealthy corporations and privileged technical experts.

 

The COVID-19 pandemic, despite its devastation, revealed the urgency of digital justice work. When digital systems determine access to education, employment, healthcare, and government services, the infrastructure supporting these systems becomes fundamental to human dignity and social inclusion. However, the pandemic also demonstrated the possibility of rapid change—the speed with which educational institutions, employers, and governments adapted to expand digital access during the crisis suggests that the barriers to digital justice are not primarily technical but rather political, reflecting the current distribution of power and resources.

 

Thus, advancing digital justice requires political struggle, not merely technical innovation. This requires building coalitions among marginalized communities, allied professionals, policymakers, and technology developers committed to justice. It requires insisting that digital systems serve human flourishing, rather than corporate profit accumulation or state control. It requires recognizing that systematic exclusion based on ascribed identity has been reproduced in digital form and that challenging such digital exclusion remains central to advancing social justice itself.

REFERENCES
  1. Sanders, C. K., & Scanlon, E. (2021). The digital divide is a human rights issue: Advancing social inclusion through social work advocacy. Journal of Human Rights and Social Work, 6(2), 133-140.
  2. Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 1-14.
  3. Raihan, M., Subroto, S., Chowdhury, N., Koch, K., Ruttan, E., & Turin, T. (2024). Dimensions and barriers for digital (in)equity and digital divide: A systematic integrative review. Digital Transformation and Society, 1(1), 54.
  4. Bozkurt, A., Jung, I., Xiao, J., Vladimirschi, V., Schuwer, R., Egorov, G., Lambert, S., AlFreih, M., Pete, J., Olcott, D., Rodes, V., Aranciaga, I., Bali, M., Alvarez, A. V., Roberts, J., Pazurek, A., Raffaghelli, J. E., Panagiotou, N., de Coëtlogon, P., ... & Paskevicius, M. (2020). A global outlook to the interruption of education due to COVID-19 pandemic: Navigating in a time of uncertainty and crisis. Asian Journal of Distance Education, 15(1), 1-126.
  5. Nuredin, A. (2024). Algorithmic bias in law: The discriminatory potential and legal liability of AI-based decision-support systems. ISC Conference Proceedings, 1-18.
  6. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernández, M., Alani, H., Berendt, B., Kruegel, T., Heinze, & Staab, S. (2020). Bias in data-driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(3), e1356.
  7. Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900-915.
  8. Oliva, T. D. (2020). Content moderation technologies: Applying human rights standards to protect freedom of expression. Human Rights Law Review, 20(4), 607-640.
  9. Soss, J., & Weaver, V. M. (2017). Police are our government: Politics, political science, and the policing of race-class subjugated communities. Annual Review of Political Science, 20, 565-591.
  10. Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed, 17(2), 389-409.
  11. Mulder, F. (2020). Humanitarian data justice: A structural data justice lens on civic technologies in post-earthquake Nepal. Third World Quarterly, 41(11), 1826-1845.
  12. Hayes, B. (2017). Migration and data protection: Doing no harm in the age of mass displacement, mass surveillance, and big data. International Review of the Red Cross, 99(904), 637-659.
  13. Anwar, M. A., & Graham, M. (2020). Between a rock and a hard place: Freedom, flexibility, precarity, and vulnerability in the gig economy in Africa. Work, Employment and Society, 34(2), 162-179.
  14. Akpuokwe, C. U., Chikwe, C. F., & Eneh, N. E. (2024). Leveraging technology and financial literacy for women's empowerment in SMEs: A conceptual framework for sustainable development. Global Journal of Education and Training, 18(3), 0041.
  15. Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20, Article 38.
  16. Yao, R., Zhang, W., Evans, R., Cao, G., Rui, T., & Shen, L. (2022). Inequities in health care services caused by the adoption of digital health technologies: A scoping review. JMIR mHealth and uHealth, 10(2), e34144.
  17. McCradden, M. D., Joshi, S., Mazwi, M., & Anderson, J. A. (2020). Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health, 2(5), e221-e223.
  18. Brkan, M. (2019). The essence of the fundamental rights to privacy and data protection: Finding the way through the maze of the CJEU's constitutional reasoning. German Law Journal, 20(4), 881-903.
  19. Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260). ACM.
  20. Watson, M. F., Bacigalupe, G., Daneshpour, M., Han, W. J., & Parra-Cardona, R. (2020). COVID-19 interconnectedness: Health inequity, climate crisis, and collective trauma. Family Process, 59(3), 897-910.
  21. Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659-684.