Examining Public Perceptions of Algorithm Transparency: An Empirical Analysis

Abstract

In the rapidly evolving landscape of the digital age, algorithms have become pivotal components of many systems, shaping the information and content consumers encounter. This empirical analysis examines the intricate relationship between algorithm transparency and its implications for consumer perception, trust, and behavioral intention. Given the prevalence of algorithms in shaping the digital media and consumption landscape, understanding the public's opinions about, and comprehension of, algorithm transparency has become paramount. Preliminary findings from this research spotlight the pivotal role of algorithm transparency in shaping consumer trust and decision-making processes. As consumers increasingly interact with algorithmically curated content, these questions will only grow in importance.


Introduction
Robert Brauneis, a law professor at George Washington University, and Ellen Goodman, a professor at Rutgers Law School, discussed in the article "Algorithmic Transparency for the Smart City" how to evaluate questions of algorithmic judgment, practicality, and fairness when algorithm code is only partially disclosed. According to the literature, four obstacles commonly arise when measuring the transparency of companies that use predictive algorithms: 1) a lack of appropriate records surrounding the algorithmic process; 2) inadequate adherence to appropriate disclosure practices by the company; 3) trade-secret claims or other confidentiality privileges held by project contractors; and 4) the difficulty of interpreting certain complex, dynamic algorithms or models even when records are available. These obstacles often make it hard to understand and oversee algorithm transparency, and can give rise to many problems.
The findings of this study align with previous research on algorithm transparency. Bitzer et al. provide a comprehensive review and research framework on algorithmic transparency, emphasizing its concepts, antecedents, and consequences [1]. Their work highlights the importance of transparency in promoting trust and accountability in algorithmic decision-making processes.
Criado et al. examine algorithmic transparency in the context of bureaucratic discretion, focusing on the SALER early warning system [2]. Their research emphasizes the need for transparency in algorithmic systems to ensure legitimacy and public acceptance. The study by de Fine Licht and de Fine Licht emphasizes the significance of providing explanations in artificial intelligence (AI) systems to enhance transparency and perceived legitimacy [3]. Their work suggests that transparent explanations of AI decision-making processes can enhance public trust and acceptance. Diakopoulos and Koliska discuss the role of algorithmic transparency in the news media [4]. Their research sheds light on the challenges and potential benefits of transparent news algorithms in promoting trust and credibility in news consumption. The study by Goad and Gal focuses on the impact of transparency on the legitimacy of algorithmic decision-making [5]. Their research underscores the importance of transparency in enhancing the perceived legitimacy of algorithmic systems and promoting public trust. Grimmelikhuijsen explores how algorithmic transparency affects the perceived trustworthiness of automated decision-making [6]. The study highlights the significance of transparent algorithms in fostering public trust and acceptance of automated decision-making processes. Kieslich et al. investigate the public perception of the importance of ethical design principles in artificial intelligence [7]. Their research emphasizes the need for transparency and ethical considerations in designing and implementing AI systems. König et al. discuss citizens' perception of public sector algorithms and emphasize the importance of effectiveness, transparency, and stakeholder involvement [8]. Their work emphasizes the role of transparency in shaping citizens' perception of public sector algorithms. Shin and Park explore the role of fairness, accountability, and transparency in algorithmic affordance [9]. Their research emphasizes the importance of transparency in promoting fair and accountable algorithmic systems. Wu et al. investigate the role of technical information transparency in the acceptance of health information technology [10]. Their research highlights the importance of such transparency in enhancing the acceptance of health information technology.
These referenced studies provide valuable insights into the role of algorithm transparency in promoting trust, accountability, and fairness across various domains. The findings of the current study further contribute to the existing body of literature on algorithm transparency, providing additional evidence for its importance in shaping consumer perception, trust, and behavioral intention.
The public still lacks a clear concept of algorithm transparency. Most existing research and surveys indicate that algorithm transparency is poorly understood, and in China often entirely unfamiliar, among the general public. The current survey investigates people's views on, understanding of, and depth of familiarity with algorithm transparency. From the users' perspective, multiple roles, such as companies, citizens, and social groups, must be involved to safeguard their rights; it is therefore essential to understand citizens' grasp of algorithm transparency. An algorithm is a formally specified coding process widely used in computing, and with the arrival of the big data era algorithms have become extremely important. As a form of government governance, data analysis can provide a wealth of useful information that guides the establishment and planning of cities and society, helping managers plan and build better, allocate resources more reasonably, and improve the efficiency and fairness of government. This survey therefore delves into the lives of ordinary people and explains their cognitive attitudes toward this "professional concept" from their own perspective. It can help the general public understand the operating principles they encounter daily in the era of big data. At the same time, raising public awareness strengthens the supervision of algorithm transparency and helps standardize responses to the problems it raises. In short, this survey studies people's views of and attitudes toward algorithm transparency, as well as their level of understanding of it.
This paper designed the first questionnaire to gauge respondents' understanding of the basic concepts of algorithm transparency, while the second questionnaire probes how deep that understanding runs. The design of the questionnaires followed two ideas: 1) to assess basic understanding, the paper asks fundamental questions, such as "When did you first learn about the concept of algorithm transparency?", or questions grounded in real-life examples; 2) to target respondents' specific concerns, the paper drew on the Cyberspace Administration of China's Measures for the Management of Generative Artificial Intelligence Services (Draft for Comments) when formulating questions.
The decision to conduct the questionnaire survey in two stages was based on the need to comprehensively explore public perceptions of algorithm transparency. The first questionnaire aimed to assess participants' basic understanding of algorithm transparency by asking fundamental questions and probing their familiarity with the concept. This stage helped establish a baseline understanding among respondents. The second questionnaire delved deeper into participants' specific understanding of algorithm transparency, targeting their concerns and incorporating relevant measures from the Cyberspace Administration of China's draft regulations on generative artificial intelligence services. This stage allowed for a more detailed examination of respondents' attitudes and perspectives.
The experimental results revealed several key findings. Most participants expressed a preference for algorithmic products that offer complete transparency. They recognized the importance of algorithm transparency in various areas of society, such as healthcare, finance, and education. Regulatory bodies were seen as having a responsibility to oversee algorithm transparency.
Participants demonstrated sensitivity towards algorithmic bias and discrimination, acknowledging the potential negative impact on certain individuals or groups.They expressed a willingness to provide feedback or report instances of algorithmic discrimination, aiming for issue resolution.
Regarding personal privacy, respondents showed a willingness to trade off certain aspects of privacy or anonymity to gain more data or algorithm transparency.However, they emphasized the need for legal measures and constraints to protect personal privacy.
In conclusion, the study highlighted the significance of algorithm transparency in shaping consumer perceptions, trust, and behavioral intentions. The findings emphasized the importance of balancing user privacy, business interests, and fairness in algorithmic decision-making. Regulatory oversight and measures to address algorithmic bias and discrimination were deemed essential for establishing a just and equitable digital society. Overall, this research contributes to a better understanding of public attitudes towards algorithm transparency and provides insights for policymakers, industry stakeholders, and researchers working in the field.

Methods
This paper focuses on investigating people's views and attitudes toward algorithm transparency. The concept of algorithm transparency refers to the degree to which an algorithm's input, output, and running process can be reasonably interpreted and understood. It plays a critical role in machine learning and artificial intelligence, as it allows individuals to comprehend how algorithms make decisions and the basis for those decisions. Algorithm transparency is essential in evaluating the reliability and fairness of algorithms, reducing the biases, misunderstandings, and injustices that can arise from algorithmic decisions. To gather comprehensive insights, this paper employed a two-part questionnaire survey. The first part aimed to assess participants' understanding of the basic concepts of algorithm transparency. It included questions about when individuals first encountered the concept and their general opinions on the importance of algorithm transparency in modern society.
Additionally, it sought to gauge participants' preference for algorithmic products based on their transparency level and trust in such products. The second part focused on specific aspects of algorithm transparency and its coordination with personal privacy. Participants were asked about their beliefs regarding potential biases and discrimination arising from algorithm transparency and their actions when encountering algorithmic discrimination. Furthermore, participants were asked about their willingness to trade privacy or anonymity for more data or algorithm transparency. The questionnaire also explored participants' opinions on which institutions should be responsible for ensuring algorithm transparency. By dividing the survey into two stages, this paper aims to capture a broader range of perspectives and better understand public perceptions of algorithm transparency. The survey design was carefully crafted to address both general understanding and specific concerns about algorithm transparency and personal privacy coordination. Through the analysis of the collected data, this paper aims to provide valuable insights into the public's views, attitudes, and understanding of algorithm transparency. This knowledge can contribute to developing policies and practices that promote transparency, trust, and fairness in algorithmic systems.

Experimental Results and Analysis

Data Collection
The data collection process involved administering two questionnaires, with sample sizes of 112 and 123, respectively. The participants were predominantly located in tier-one and tier-two cities. Notably, the questionnaire pertaining to the legality of artificial intelligence (AI) received a more geographically concentrated distribution of respondents.

Algorithm Transparency Questionnaire Results
According to the data presented in Figure 1, a significant majority of 58.93% of respondents indicated a clear preference for using algorithmic products that offer complete transparency. Furthermore, as depicted in Figure 2, 66.96% acknowledged the crucial significance and extensive adoption of algorithm transparency in contemporary society. Additionally, 51.79% of participants demonstrated sensitivity toward the existence of algorithmic bias within products, recognizing that algorithm transparency can be influenced by such bias. They also indicated a willingness to proactively provide feedback to platform providers regarding any identified issues. Concerning tolerance for the invasion of personal privacy by platforms, 50.89% of respondents were willing to sacrifice some privacy for more precise recommendations and believed that protecting personal privacy requires legal involvement and constraints.
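As a sanity check on the reported figures, the percentages above can be reproduced from raw response counts. The sketch below is purely illustrative: the counts are inferred from the published percentages and the sample size (n = 112) reported in the Data Collection section, not taken from the original raw data.

```python
# Illustrative sketch: reproducing the reported percentages from response counts.
# The counts are inferred from the published percentages (n = 112) and are
# assumptions, not the study's original raw data.

def pct(count: int, n: int) -> float:
    """Share of respondents as a percentage, rounded to two decimals."""
    return round(100 * count / n, 2)

N = 112
responses = {
    "prefer fully transparent products": 66,   # reported as 58.93%
    "see transparency as crucial": 75,         # reported as 66.96%
    "sensitive to algorithmic bias": 58,       # reported as 51.79%
    "accept some privacy trade-off": 57,       # reported as 50.89%
}

for label, count in responses.items():
    print(f"{label}: {pct(count, N)}%")
```

Each inferred count rounds back exactly to the percentage reported in the text, which suggests the published figures are simple proportions of the 112 respondents.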

Legality of Artificial Intelligence Questionnaire Results
In the questionnaire on the legality of AI, respondents generally acknowledged the imperfection of algorithm transparency and the presence of varying degrees of discrimination. As shown in Figure 3, 50.41% of participants believed that racial bias exists in AI algorithm transparency, followed by biases related to national origin (47.15%), gender, geographic location, and age. Under the existing legal framework, approximately 80% of respondents believed that AI should not infringe upon others' intellectual property rights and other legitimate citizen rights. Regarding generative AI products, which have recently sparked public debate, 77.24% of respondents agreed that providers have an obligation to disclose their technical standards to facilitate better social supervision. Furthermore, a separate survey conducted among 123 participants focused on algorithmic discrimination. As shown in Figure 4, respondents identified discrimination based on race (50.41%), gender (44.72%), national origin (47.15%), age (41.46%), profession (37.4%), faith (39.84%), and geographic location (41.46%) within algorithm transparency. This indicates the presence of biased discrimination in many of the algorithmic products people rely on. Such bias may stem from inherent biases in the training data, as algorithmic decisions depend on the data they are built on; in other words, human biases are rationalized and amplified by algorithmic processes. For algorithm developers, ensuring data diversity and fairness is crucial to avoiding algorithmic discrimination, and the path to achieving this runs through the transparency and disclosure of algorithmic principles. On the question of algorithmic discrimination, 51.79% of participants demonstrated sensitivity toward algorithmic bias within products and expressed a willingness to actively provide feedback to platform providers about identified issues. As depicted in Figure 5, 50.89% of participants believed that laws should regulate the relationship between algorithm transparency and personal privacy. This suggests that many individuals see a need for authoritative third parties, such as governments or industry associations, to regulate the use and transparency of algorithms. When encountering algorithmic discrimination, most respondents (51.79%) chose to provide feedback or report the issue in the hope of finding a resolution. These two surveys reflect the public's concerns regarding algorithmic discrimination and algorithm transparency. These issues must be taken seriously, and solutions must be sought, to establish a just and equitable digital society. This may require balancing the protection of user privacy, the safeguarding of business interests, and the assurance of fairness; it may also necessitate new laws and policies to regulate and supervise the use of algorithms.
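The discrimination question above is multi-select, so categories sum to well over 100%. A minimal sketch of how such responses could be tabulated is shown below; the per-category counts are inferred from the reported percentages (n = 123) and stand in for the original raw data, which is not available here.

```python
# Illustrative tabulation of a multi-select "discrimination type" question.
# Counts are inferred from the reported percentages (n = 123) and are
# assumptions, not the study's raw responses.
from collections import Counter

N = 123
counts = Counter({
    "race": 62,                # reported as 50.41%
    "national origin": 58,     # reported as 47.15%
    "gender": 55,              # reported as 44.72%
    "age": 51,                 # reported as 41.46%
    "geographic location": 51, # reported as 41.46%
    "faith": 49,               # reported as 39.84%
    "profession": 46,          # reported as 37.4%
})

# Multi-select shares: each category is divided by the number of
# respondents, so the shares intentionally sum to more than 100%.
for category, c in counts.most_common():
    print(f"{category}: {100 * c / N:.2f}%")
```

Because respondents may tick several categories, reporting each share against the full sample (rather than against total ticks) is the convention the paper's figures appear to follow.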
In this study, the two questionnaires focused on different aspects. Questionnaire one primarily explored the application and impact of algorithm transparency in people's lives, while questionnaire two delved deeper into algorithmic technical issues within AI. Information in modern society is complex and diverse, and questionnaire design should give respondents a wider range of choices rather than crudely categorizing social entities and scopes. In the survey on algorithm transparency, it cannot be ascertained whether respondents' answers are based on past practical experience or on considerations formed in the moment, which introduces considerable uncertainty and instability into the survey. For the survey on the legality of AI, the question paths should be further improved; the results showed that different answers may arise when conventional questions are applied to emerging phenomena. A deeper understanding of respondents' perspectives is needed to supplement and refine the research.

Conclusions
In conclusion, this paper explores the impact of algorithm transparency on consumer perception, trust, and behavioral intention. Through two stages of questionnaire surveys, the study investigated public views and attitudes toward algorithm transparency, covering both general understanding and specific concerns related to biases, discrimination, and personal privacy coordination. The findings revealed that the public highly values algorithm transparency, with a preference for algorithmic products that offer complete transparency. Participants recognized the importance of algorithm transparency in various aspects of modern society and expressed a willingness to act against algorithmic discrimination. The research emphasized the need for regulatory oversight and the involvement of authoritative institutions to ensure algorithm transparency. It also highlighted the trade-off between personal privacy and algorithm transparency, with participants willing to sacrifice certain aspects of privacy for more accurate recommendations. By shedding light on public perceptions, this paper contributes to the ongoing discussions surrounding algorithm transparency and its role in promoting trust, fairness, and accountability in algorithmic decision-making. The insights gained from this research can inform the development of policies and practices that enhance algorithm transparency and address concerns related to biases and discrimination.

Figure 1: The feedback from the questionnaire on algorithm transparency.

Figure 2: How do you perceive the importance of algorithm transparency?

Figure 3: Do you believe racial bias exists in AI algorithm transparency?

Figure 5: Who should regulate the relationship between algorithm transparency and personal privacy?