Relationship to other online harm reduction disciplines
Internet safety operates alongside several related disciplines.

Types of online harm experienced
Internet safety addresses what are commonly referred to as online harms: the various ways that technology can be misused to cause damage to individuals, communities, and society. These harms can be grouped into several interconnected types. Harms can occur immediately through direct actions, or take effect over time through gradual manipulation or the erosion of autonomy and perceptions of safety and security.

Psychological Harm: Damage to mental health and wellbeing through cyberbullying, harassment, exposure to disturbing content, addiction-like behaviors, and the erosion of self-esteem through social comparison. This category also includes grooming, manipulation, and other forms of psychological abuse facilitated by digital platforms.

Financial Harm: Economic damage through fraud, scams, identity theft, unauthorized transactions, and other forms of financial exploitation. This includes both direct monetary losses and longer-term consequences such as damaged credit or compromised financial accounts.

Physical Harm: Threats to physical safety that originate online, including stalking that moves offline, sharing of location data that enables real-world harassment, encouragement of self-harm or dangerous behaviors, and coordination of offline violence or abuse.

Societal Harm: Damage to democratic processes, social cohesion, and public discourse through misinformation, hate speech, extremist recruitment, election interference, and the amplification of harmful conspiracy theories. This category includes threats to institutional trust and social stability.

These categories often overlap and interact. For example, financial scams may cause both economic and psychological harm, while misinformation campaigns can lead to both societal damage and individual psychological distress. The interconnected nature of these harms requires comprehensive approaches that address multiple dimensions simultaneously.

Harmful activities
The activities and behaviors that contribute to online harm are commonly categorized using the "4 C's" framework: Content, Contact, Conduct, and Commercial risks.

Content Risks: Harms arising from exposure to problematic material online. This includes violent or disturbing imagery, hate speech, misinformation, content promoting self-harm or suicide, developmentally inappropriate material such as pornography accessible to children, and extremist content that promotes dangerous ideologies or activities.

Contact Risks: Harms occurring through direct interaction with others online. This encompasses cyberbullying and harassment, grooming for sexual exploitation, unwanted contact from strangers, stalking and persistent unwanted communication, and recruitment into harmful activities, including extremist groups or criminal enterprises.

Conduct Risks: Harms resulting from an individual's own online behavior, often influenced by digital environments. This includes sharing personal information inappropriately, engaging in risky behaviors encouraged online, participating in harmful challenges or trends, excessive screen time that affects wellbeing, and creating or sharing harmful content that may later cause regret or consequences.

Commercial Risks: Harms arising from exploitative commercial practices and inappropriate transactional relationships online. This includes fraud and financial scams, identity theft for economic gain, exploitative marketing practices targeting vulnerable users, inappropriate collection and use of personal data for commercial purposes, and predatory monetization of user engagement or addiction-like behaviors.

These categories recognize that harmful activities often involve complex interactions between platform design, user behavior, and external actors with malicious intent. The 4 C's framework focuses primarily on individual-level activities and risks. While this captures many important dimensions of online safety, some harms manifest at the societal level through systemic effects that may not be reducible to individual experiences, such as the erosion of democratic discourse, institutional trust, or social cohesion through coordinated manipulation of information ecosystems.
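Because the four categories are discrete and can co-occur in a single incident, they map naturally onto a tagging taxonomy in safety tooling. The following Python sketch is purely illustrative: the class names, queue labels, and routing logic are hypothetical, not part of any published implementation of the framework.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The 4 C's: broad categories of harmful online activity."""
    CONTENT = "content"        # exposure to problematic material
    CONTACT = "contact"        # harmful interaction with others
    CONDUCT = "conduct"        # the user's own risky behavior
    COMMERCIAL = "commercial"  # exploitative commercial practices


@dataclass
class HarmReport:
    """A report tagged with one or more 4 C's categories.

    A single incident can span several categories; for example, a
    financial scam initiated through unsolicited direct messages is
    both a contact risk and a commercial risk.
    """
    description: str
    categories: set[RiskCategory]


def route_report(report: HarmReport) -> list[str]:
    """Map each tagged category to a (hypothetical) review queue."""
    queues = {
        RiskCategory.CONTENT: "content-moderation",
        RiskCategory.CONTACT: "trust-and-safety",
        RiskCategory.CONDUCT: "user-education",
        RiskCategory.COMMERCIAL: "fraud-review",
    }
    # Sort for a deterministic result, since sets are unordered.
    return sorted(queues[c] for c in report.categories)


report = HarmReport(
    description="Scam investment offer sent via unsolicited DMs",
    categories={RiskCategory.CONTACT, RiskCategory.COMMERCIAL},
)
print(route_report(report))  # ['fraud-review', 'trust-and-safety']
```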
Multidisciplinary foundations

Internet safety draws from a wide range of academic disciplines and professional fields, each contributing distinct perspectives, methodologies, and expertise to understanding and addressing online harms. This multidisciplinary approach reflects the complex nature of technology-mediated risks, which cannot be adequately addressed through any single lens or domain of knowledge.

Multistakeholder approach
The multidisciplinary nature of internet safety challenges necessitates a multistakeholder approach, bringing together different sectors with complementary expertise, responsibilities, and capabilities. No single organization or sector has the knowledge, authority, or resources to address the full spectrum of online harms effectively. This collaborative model recognizes that sustainable solutions require coordination across government, industry, civil society, academia, and user communities.

Government and Regulators play a crucial role in developing legal frameworks and enforcement mechanisms that establish baseline safety standards. They set compliance requirements for platforms and services, fund research initiatives and public awareness campaigns, and facilitate international cooperation on cross-border issues. Regulatory bodies also provide oversight and accountability mechanisms that ensure other stakeholders fulfill their responsibilities.

Technology Companies and Platforms are responsible for implementing safety-by-design principles in their products and services. This includes developing and maintaining content moderation systems, community management processes, and user empowerment tools that give individuals control over their online experiences. Companies also contribute through transparency reporting, external audits, and collaboration with other stakeholders on emerging challenges.

Civil Society and NGOs advocate for user rights and the protection of vulnerable populations while providing digital literacy education and training programs. These organizations conduct independent research, develop policy recommendations, and support victims of online harm through direct services. They also serve as important bridges between affected communities and other stakeholders, ensuring that policy discussions reflect real-world impacts.

Academic and Research Institutions provide the evidence base for understanding online harms and evaluating the effectiveness of interventions. They develop new safety technologies and approaches, train professionals in the field, and conduct independent research that informs policy and practice. Universities also serve as neutral spaces for multistakeholder dialogue and collaboration.

Users and Communities practice digital citizenship and provide peer support within online spaces. They report harmful content and behaviors, participate in safety education initiatives, and advocate for safer online environments. User communities also contribute valuable insights about emerging risks and the real-world effectiveness of safety measures through their lived experiences.

Approaches to online safety
Internet safety employs both proactive and reactive approaches to address online harms. Proactive measures aim to prevent harm before it occurs through thoughtful design, education, and regulation, and by building both system-level and individual resilience. Reactive measures address harms that have already manifested, providing response mechanisms and support for those affected.

Proactive safety measures
Safety by Design incorporates safety considerations into technology development from the earliest stages, including user interface design, algorithmic systems that minimize harmful content amplification, and platform architectures that protect user privacy and autonomy.

Digital Literacy and Education builds users' capacity to navigate online spaces safely, recognize risks, critically evaluate information, and develop healthy relationships with technology through schools, community programs, and public awareness campaigns.

Regulation establishes legal frameworks, safety standards, and compliance requirements that platforms and services must meet. This includes laws governing content moderation, data protection, child safety, and transparency reporting obligations.

Positive Digital Citizenship promotes respectful and constructive online behavior through community building, social norm development, and programs that encourage empathy and ethical reasoning in digital contexts.

Empowerment Tools provide users with controls over their online experience, including content filtering, privacy settings, blocking mechanisms, and tools to manage their digital footprint according to their preferences and risk tolerance.
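As a concrete illustration of empowerment tools, the sketch below models a user-side filter combining muted keywords, a block list, and a followers-only contact setting. All names and the filtering logic are hypothetical; real platforms implement these controls in many different ways.

```python
from dataclasses import dataclass, field


@dataclass
class UserSafetySettings:
    """Hypothetical per-user empowerment controls.

    Each field corresponds to a control described above: muted
    keywords for content filtering, a block list, and a toggle
    restricting contact to approved followers.
    """
    muted_keywords: set[str] = field(default_factory=set)
    blocked_users: set[str] = field(default_factory=set)
    restrict_to_followers: bool = False


def is_visible(post_author: str, post_text: str,
               author_is_follower: bool,
               settings: UserSafetySettings) -> bool:
    """Apply the user's own controls before a post is shown."""
    if post_author in settings.blocked_users:
        return False
    if settings.restrict_to_followers and not author_is_follower:
        return False
    text = post_text.lower()
    return not any(kw in text for kw in settings.muted_keywords)


settings = UserSafetySettings(muted_keywords={"giveaway"},
                              blocked_users={"spam_account_42"})
print(is_visible("friend_1", "Lunch later?", True, settings))    # True
print(is_visible("stranger", "FREE GIVEAWAY click now",
                 False, settings))                               # False
```

The key design point is that these controls run on the user's own terms: the same post can be visible to one person and hidden for another, according to each individual's preferences and risk tolerance.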
Reactive safety measures

Emerging approaches
The field continues to evolve as technology and online behavior change. Key areas of development include algorithmic accountability, to ensure that recommendation and content moderation systems operate fairly and transparently; privacy-preserving safety measures that protect user privacy while preventing harm; enhanced global governance mechanisms for addressing cross-border online harms; inclusion and equity initiatives to ensure that safety measures protect all users, particularly marginalized communities; and mental health integration that better incorporates digital wellness considerations into safety frameworks.

Global frameworks and governance
The complex, cross-border nature of online harms has catalyzed the development of new governance models that reflect internet safety's multidisciplinary and multistakeholder foundations. These emerging frameworks move beyond traditional regulatory approaches to embrace collaborative models that bring together governments, technology companies, civil society, and academic institutions. Regional Legislative Frameworks represent coordinated attempts to establish comprehensive safety standards.

Research and evidence base
The field of internet safety is supported by a growing body of research evidence across multiple domains, providing the empirical foundation for understanding online harms and developing effective interventions.

Prevalence Studies: Large-scale surveys measuring the extent and nature of online harm across different populations and platforms. Notable examples include the EU Kids Online network, which has conducted comprehensive surveys across 25 European countries, covering over 25,000 children and revealing that exposure to various online risks varies significantly by country and demographic factors. Similarly, the Global Kids Online initiative, led by UNICEF and LSE, has extended this research globally, surveying over 14,000 internet-using children across multiple countries to understand digital experiences in diverse cultural contexts. Pew Research Center studies represent another significant contribution, showing that 46% of U.S. teens have experienced online bullying or harassment, with documented demographic variations in both platform usage and risk exposure.

Impact Research: Studies documenting the psychological, social, and economic effects of online experiences on individuals and communities. Key examples include work by the Cyberbullying Research Center, where studies by Hinduja and Patchin have demonstrated significant connections between cyberbullying experiences and increased rates of suicidal ideation among adolescents, with both victimization and perpetration linked to mental health impacts. Research by the Young and Resilient Research Centre at Western Sydney University provides another example, exploring how digital participation affects youth resilience and wellbeing, particularly among marginalized communities, with studies encompassing over 8,000 children and young people from more than 80 countries.

Intervention Effectiveness: Randomized controlled trials and other rigorous evaluations of safety measures and educational programs. Examples include assessments of digital literacy curricula, evaluations of content moderation techniques, and studies measuring the effectiveness of bystander intervention programs in reducing online harassment. For instance, the Young and Resilient Research Centre has developed and evaluated youth-centered approaches to online safety education, demonstrating the importance of including young people's voices in designing interventions.

Technology Evaluation: Research on the effectiveness and unintended consequences of content moderation systems, recommendation algorithms, and other safety technologies. Studies examine the accuracy of automated content detection, potential biases in algorithmic decision-making, and the broader impacts of platform design choices on user behavior and wellbeing. One example of collaborative research in this area is the work of the Global Internet Forum to Counter Terrorism (GIFCT), which has contributed research on hash-sharing databases and collaborative technical approaches to identifying harmful content across platforms.
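To illustrate the hash-sharing approach mentioned above: participating platforms exchange fingerprints of known harmful media rather than the media itself, and each platform checks new uploads against the shared set. The sketch below is a deliberate simplification; production systems such as GIFCT's hash-sharing database use perceptual hashes (for example, PDQ for images) that tolerate re-encoding and minor edits, whereas the SHA-256 digest used here only matches byte-identical files.

```python
import hashlib

# Stand-in for a shared, cross-platform hash database. Only hashes
# are exchanged, so participating platforms never need to transmit
# or store one another's copies of the underlying media.
SHARED_HASH_DB: set[str] = set()


def fingerprint(media_bytes: bytes) -> str:
    """Digest of a media file; a proxy for a perceptual hash."""
    return hashlib.sha256(media_bytes).hexdigest()


def register_known_harmful(media_bytes: bytes) -> None:
    """One platform contributes a hash of verified harmful material."""
    SHARED_HASH_DB.add(fingerprint(media_bytes))


def check_upload(media_bytes: bytes) -> bool:
    """Return True if an upload matches known harmful material."""
    return fingerprint(media_bytes) in SHARED_HASH_DB


register_known_harmful(b"<bytes of a known harmful video>")
print(check_upload(b"<bytes of a known harmful video>"))  # True
print(check_upload(b"<bytes of an ordinary upload>"))     # False
```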
Current challenges

Despite significant advances in understanding and addressing online harms, the field of internet safety continues to face several persistent and emerging challenges that require ongoing attention and innovative solutions.

Developmental Mismatch: Child development proceeds through guided autonomy, with gradual exposure to risk and complexity accompanied by appropriate support. Digital environments often bypass this developmental scaffolding, exposing children to content and interactions before they have the capacity to handle them safely.

Scale and Automation: The volume of online content and interactions makes it difficult to identify and address harmful behavior at scale, leading to reliance on automated systems that may lack nuance.

Cross-Platform Coordination: Harmful actors often operate across multiple platforms, requiring coordination between companies that may compete with each other.

Cultural and Linguistic Diversity: Safety approaches developed in one cultural context may not translate effectively to others, requiring localized solutions.

Emerging Technologies: New technologies such as artificial intelligence, virtual reality, and blockchain create novel safety challenges that existing frameworks may not address.

Balancing Safety and Rights: Safety measures must not disproportionately restrict freedom of expression, privacy, or other fundamental rights.