Bias, Fairness and the Machine in the Sustainable Digital Ecosystem: An Analytical Survey on Confronting Inequality in the Digital World
Abstract
The rapid growth of digital technologies has changed how we classify, assess risk, and allocate resources. Algorithmic models increasingly operate as infrastructures of measurement in contemporary institutions. This study develops a theoretically grounded, accounting-derived perspective to explore algorithmic bias and fairness in sustainable digital ecosystems. Through an analytical synthesis of interdisciplinary scholarship and documented empirical cases, the paper conceives of algorithmic systems as extensions of accounting infrastructures, performing functions of recognition, measurement, and classification. It argues that the biases embedded in data selection, proxy-based measurement, and optimization constitute forms of representational distortion analogous to misstatement in accounting systems. Integrating distributive justice, capability theory, and accountability scholarship, the study shows that algorithmic fairness is not merely a technical problem but fundamentally a problem of governance and institutional design. Comparative analysis of regulatory developments reveals an emerging reliance on risk-based classification, disclosure requirements, and audit-like oversight, alongside persistently underdeveloped mechanisms for enforceability and contestability. The paper concludes that sustainable algorithmic governance requires structured accountability mechanisms that embed auditability, participatory oversight, and contextual regulatory integration, thereby extending measurement, audit, and sustainability accounting theory into the field of algorithmic decision-making.
This work is licensed under a Creative Commons Attribution 4.0 International License.