Smart tech and safe choices: why Safer Internet Day 2026 matters more than ever

Artificial intelligence (AI) is no longer something happening “in the background” of society. It is part of everyday digital life for children, young people and adults, often without us even realising it.

As we mark Safer Internet Day on Tuesday 10 February, with the theme ‘smart tech, safe choices – exploring the safe and responsible use of AI’, it is an important moment to pause and reflect on how AI is shaping the way we learn, work and interact online.

Dr Atefeh Tate, Lecturer in Information Systems at the University of Salford, explains why this year is a critical turning point for AI awareness, digital safety and responsible use.

AI in everyday digital life

AI is already embedded in many of the digital tools people use every day. Recommendation systems on platforms such as YouTube, TikTok, Netflix and Spotify use AI to decide what content users see next. Search engines rely on AI to rank results, predict queries and personalise information, while social media platforms use AI to filter content, moderate harmful material and target advertising.

For children and young people in particular, AI shapes online experiences through gaming algorithms, adaptive learning platforms, automated content moderation and even camera filters. Adults encounter AI through navigation apps, online shopping recommendations, fraud detection in banking and workplace tools that automate scheduling, reporting or decision support.

Because these systems are built into familiar apps and services, many people do not recognise them as AI at all. This lack of visibility can make it harder for people to question how decisions are made, what data is being used, and whether systems are operating fairly, safely and responsibly.

Why 2026 is a critical moment

AI has moved from being a specialist or experimental technology into something that directly affects everyday decision-making, learning, work and social interaction. Generative AI tools are now widely accessible to the public, including children and young people, often without clear guidance, education or safeguards.

At the same time, regulation, education and public understanding are struggling to keep pace with rapid technological change. Organisations are adopting AI faster than they are developing policies, skills and ethical frameworks to govern its use responsibly.

From an Information Systems perspective, the challenge is not only how AI systems are built, but how people interact with them, trust them, and integrate them into everyday life. This makes 2026 a critical point at which to ensure that AI literacy, digital responsibility and informed choice become part of mainstream digital education, rather than a response made only after harm has occurred.

Understanding the risks

AI tools such as chatbots and generative systems can sound confident and authoritative, even when their outputs are incorrect, biased or incomplete. Without critical thinking, users may accept responses at face value, leading to misinformation or poor decision-making.

Data privacy is another significant concern. Many AI tools rely on large volumes of user data, and people may not fully understand what information they are sharing, how it is stored, or how it may be reused. For children and young people, this raises safeguarding and consent issues.

There are also risks related to academic integrity, digital wellbeing and dependency. Using AI without appropriate guidance can blur the line between support and substitution, especially in learning contexts. Over time, this may affect confidence, motivation and the development of critical digital skills.

Bias and unfairness remain persistent challenges, as AI systems are trained on existing data that can reflect and reinforce social inequalities.

Building smarter, safer choices

In the UK, 2026 marks an important milestone in digital regulation and AI safeguarding. The Online Safety Act, alongside wider AI governance policies, reflects a shift towards accountability, transparency and responsibility, recognising that technological innovation must be balanced with public protection and digital wellbeing. While the Act does not regulate AI directly, it addresses the outcomes and impacts of AI-driven systems used by online platforms, including recommender algorithms, content moderation tools and generative technologies.

Safer Internet Day provides an opportunity to move beyond fear or hype and focus on informed, responsible use. Smart technology requires smart choices. That means helping people understand how AI works, what risks it presents, and how they can engage with it critically and safely.

As AI continues to shape everyday digital life, building confidence, awareness and responsibility will be just as important as developing the technology itself.