Article type: Scientific research article
Authors
1. Associate Professor, Research Group for Social Studies of Information, Iranian Research Institute for Information Science and Technology (IranDoc), Tehran, Iran.
2. MA, Department of Media Management, Faculty of Culture and Communication, Soore University, Tehran, Iran.
Abstract
Objective
In today’s digital landscape, the rapid spread of unverified and misleading information, particularly on social media, poses significant challenges to modern societies. Fake news has emerged as a critical issue, threatening public trust, fueling social polarization, and damaging the legitimacy of institutions. Beyond fake news itself, however, lies a more systemic and strategic problem: the need for effective content moderation. As social media platforms play an increasingly central role in shaping public opinion, distributing information, and facilitating discourse, moderating fake news on these platforms has become a critical dimension of media governance. Content moderation in this context is not limited to deleting or labeling posts; rather, it encompasses a multi-layered process involving policymaking, technological tools, user engagement, and cultural adaptation. This study aims to develop a comprehensive, context-sensitive framework for moderating fake news on domestic platforms by examining the strategies employed by two global leaders in the field: X (formerly Twitter) and Instagram. Through comparative analysis and in-depth engagement with domain experts, the research localizes and adapts these strategies to the specific socio-cultural and regulatory environment of Iran.
Methodology
This research adopts a qualitative, comparative approach. Initially, the study reviewed policy documents, moderation guidelines, and institutional reports from both platforms to identify their respective content moderation mechanisms. In the second phase, data were gathered through semi-structured interviews with experts in digital policy, media regulation, and information governance. Thematic analysis was used to code and synthesize findings in a structured and iterative manner, facilitating the development of a localized and actionable moderation framework.
Findings
The results indicate that effective fake news moderation requires a dual-layered approach involving both off-platform and on-platform interventions. Off-platform strategies include legislative development, promotion of reliable news sources, enhancement of media literacy, institutional empowerment, public education campaigns, fostering critical discourse, development of detection algorithms, and encouraging civic engagement. On-platform strategies encompass internal content moderation guidelines, user verification systems, crowdsourcing fact-checking tasks, collaborations with official news organizations, algorithmic filtering, labeling and ranking content, and enabling transparent user feedback mechanisms. Notably, Iranian experts emphasized the importance of aligning these measures with local legal and cultural norms, ensuring algorithmic transparency, and offering users clear paths for appeal and redress.
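To make the on-platform mechanisms above more concrete, the crowdsourced fact-checking and content-labeling strategies can be illustrated with a minimal sketch. This is a hypothetical example, not part of the study: the function `aggregate_labels`, the `min_ratings` quorum, and the `threshold` value are all illustrative assumptions about how crowd flags might be turned into a visible "disputed" label with a transparent, auditable rule.

```python
from collections import defaultdict

def aggregate_labels(ratings, min_ratings=5, threshold=0.7):
    """Turn crowd ratings into per-post moderation labels.

    ratings: iterable of (post_id, is_misleading) pairs, one per rater.
    Returns a dict mapping post_id to one of:
      'pending'  - fewer than min_ratings raters, no decision yet
      'disputed' - at least threshold share of raters flagged the post
      'ok'       - enough ratings, but below the flag threshold
    """
    counts = defaultdict(lambda: [0, 0])  # post_id -> [flags, total]
    for post_id, is_misleading in ratings:
        counts[post_id][1] += 1
        if is_misleading:
            counts[post_id][0] += 1

    labels = {}
    for post_id, (flags, total) in counts.items():
        if total < min_ratings:
            labels[post_id] = "pending"
        elif flags / total >= threshold:
            labels[post_id] = "disputed"
        else:
            labels[post_id] = "ok"
    return labels
```

A quorum plus a fixed threshold keeps the rule simple and explainable to users, which matches the experts' emphasis on algorithmic transparency and clear paths for appeal; a production system would also need rater-reputation weighting and safeguards against coordinated flagging.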
Conclusion
Fake news, amplified by viral digital dynamics, can erode social cohesion and destabilize public discourse. However, the deployment of structured and context-aware content moderation strategies can both limit the spread of false information and enhance user trust in digital environments. The framework proposed in this study synthesizes international practices with localized insights to deliver a hybrid model suitable for domestic social media platforms. This model not only serves as a policy guideline for platform developers and regulators but also contributes to the broader agenda of rebuilding public trust and ensuring the credibility of online information ecosystems in Iran and similar contexts.