“Social media is but a slurry pool, draining inexorably into the cesspit that is the Internet”
In the digital age, social media platforms have become integral to everyday communication, information dissemination, entertainment, and even politics. Facebook, Instagram, X (formerly Twitter), TikTok, Snapchat, YouTube, and LinkedIn—among others—boast billions of users collectively. These platforms offer convenience, connectivity, and entertainment. However, beneath the surface lies a complex web of privacy concerns, data harvesting practices, and algorithmic influences that significantly shape how users think, feel, and act online.
Facebook, owned by Meta Platforms, is one of the oldest and most influential social media networks. Its vast ecosystem includes Messenger, Instagram, and WhatsApp. Facebook allows users to share content, join groups, advertise, and create events, making it a hub of personal and commercial activity.
Instagram focuses on image and video sharing, stories, reels, and direct messaging. It has a younger user base and leans heavily on visual appeal. Like Facebook, Instagram also leverages vast amounts of user data to optimise content and advertisements.
X is a platform for short-form posts, often used for news, opinion, and trending discussions. It’s widely adopted by journalists, politicians, celebrities, and activists. Its fast-paced, public nature makes it both influential and controversial.
Owned by Chinese company ByteDance, TikTok has revolutionised short-form video sharing with an emphasis on entertainment and virality. It uses a powerful recommendation engine that can hook users for hours.
Snapchat popularised ephemeral messaging and AR filters. Its focus is on one-to-one interactions, and it's particularly popular among teenagers and young adults. While it claims to delete content quickly, metadata and behaviour are still tracked.
While primarily a video-sharing platform, YouTube functions as a hybrid between social media and traditional media. Owned by Google, it uses data from across the Google ecosystem to serve highly targeted ads and recommendations.
LinkedIn is a professional networking site used for job hunting, recruiting, industry news, and skill development. While it is less intrusive than other platforms, it still collects vast amounts of career-related user data.
All major platforms collect both explicit and implicit data. Explicit data includes information users knowingly provide: names, birthdays, photos, posts, likes, and messages. Implicit data, however, is often invisible to users and includes behavioural signals such as time spent viewing each post, scrolling and clicking patterns, device and browser details, location, and the interests inferred from all of the above.
Many companies track users even when they are not on the platform. For instance, Facebook pixels embedded on websites, and Google trackers in mobile apps, enable surveillance across the internet. This blurs the boundaries between platforms and invades user privacy beyond the confines of a single service.
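The mechanics of a tracking pixel are simple: it is a tiny, invisible image whose URL carries identifying parameters, so merely loading the page fires a request back to the platform. The sketch below builds the kind of request URL such a pixel generates; the endpoint and parameter names are illustrative, not Facebook's actual pixel protocol.

```python
from urllib.parse import urlencode

def build_pixel_url(base, pixel_id, page_url, event="PageView"):
    """Construct the GET request a tracking pixel fires when a page loads.

    Parameter names are illustrative of the general technique, not any
    specific vendor's API.
    """
    params = {
        "id": pixel_id,   # advertiser's pixel ID, linking the hit to an account
        "ev": event,      # event name, e.g. "PageView" or "Purchase"
        "dl": page_url,   # the third-party page the user is currently on
    }
    return f"{base}?{urlencode(params)}"

url = build_pixel_url("https://tracker.example.com/tr",
                      "12345", "https://shop.example.com/cart")
```

Because the request originates from the third-party site but lands on the platform's servers, the platform learns which outside pages the user visits, even when the user never opens the platform itself.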
While many companies deny “selling” data outright, they often share it with advertisers, partners, and third-party analytics firms. Facebook’s Cambridge Analytica scandal highlighted how harvested data could be used to build psychological profiles and influence political outcomes.
Many platforms use deceptive UX design—known as “dark patterns”—to nudge users into sharing more than they intended. Opt-out settings are often hidden, privacy terms are opaque, and default settings tend to maximise data sharing.
Most platforms justify data collection as a means to personalise the user experience. Feeds are tailored to individual interests, boosting engagement and satisfaction. However, this often creates filter bubbles where users are only exposed to views that reinforce their own.
Advertising is the primary revenue source for most social platforms. By analysing user data, platforms enable advertisers to target extremely specific demographics—down to behaviours, interests, and purchasing intent. This level of granularity was unthinkable in the traditional advertising era.
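The granularity described above can be pictured as a filter over user attributes. This is a minimal sketch, with invented field names, of the kind of audience-selection query an ad platform lets advertisers run.

```python
users = [
    {"id": 1, "age": 24, "interests": {"fitness", "vegan"}, "purchase_intent": True},
    {"id": 2, "age": 41, "interests": {"golf"},             "purchase_intent": False},
    {"id": 3, "age": 29, "interests": {"fitness"},          "purchase_intent": True},
]

def match_audience(users, age_range, required_interest, intent_only=True):
    """Select users matching an advertiser's micro-targeting criteria.

    The attributes (age, interests, purchase intent) are illustrative of
    the categories platforms expose to advertisers.
    """
    lo, hi = age_range
    return [u["id"] for u in users
            if lo <= u["age"] <= hi
            and required_interest in u["interests"]
            and (not intent_only or u["purchase_intent"])]

audience = match_audience(users, (18, 35), "fitness")  # → [1, 3]
```

Real platforms apply such filters over thousands of inferred attributes rather than three, which is what makes behavioural targeting qualitatively different from demographic targeting in traditional media.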
Platforms don’t just reflect preferences; they predict and shape them. By learning when users are most susceptible to influence, systems can time notifications or content to prompt reactions, purchases, or extended usage.
Data from social media is also used to train AI models, including facial recognition, natural language processing, and sentiment analysis. This raises ethical concerns, especially when users’ images and conversations are fed into machine learning systems without informed consent.
Algorithms are sets of instructions that determine what content appears in a user’s feed. Instead of showing posts in chronological order, platforms use ranking systems to display what is “most relevant” or engaging—based on complex criteria derived from user behaviour.
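A toy version of such a ranking system makes the idea concrete: each post gets a relevance score from behavioural signals, discounted by age, and the feed is sorted by that score. The signals and weights below are invented for illustration; production systems combine far more features.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float       # how often this user interacts with the author (0-1)
    predicted_engagement: float  # model's estimate the user will react (0-1)
    age_hours: float             # time since the post was published

def score(post, decay=0.1):
    """Toy relevance score: engagement-weighted, decaying with age.

    The 0.6/0.4 weights and the decay constant are illustrative only.
    """
    freshness = 1 / (1 + decay * post.age_hours)
    return (0.6 * post.predicted_engagement + 0.4 * post.author_affinity) * freshness

feed = [Post(0.9, 0.2, 1), Post(0.1, 0.95, 5), Post(0.5, 0.5, 48)]
ranked = sorted(feed, key=score, reverse=True)  # highest "relevance" first
```

Note that chronology plays only a minor role here: a two-day-old post is heavily discounted regardless of quality, while a fresh post from a frequently-engaged author floats to the top.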
Algorithms exploit human psychology. By rewarding users unpredictably (as in slot machines), they create dopamine loops. Likes, shares, and comments give intermittent reinforcement, encouraging compulsive checking and scrolling.
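The slot-machine analogy refers to a variable-ratio reinforcement schedule: each "pull" (a feed refresh) pays off only sometimes, and the unpredictability itself is what sustains the behaviour. A minimal simulation of that schedule:

```python
import random

def refresh_feed(rng, p_reward=0.3):
    """Simulate one pull-to-refresh under a variable-ratio schedule.

    With probability p_reward the refresh yields a 'reward' (new likes,
    notifications, novel content); otherwise nothing. p_reward is an
    assumed figure for illustration.
    """
    return rng.random() < p_reward

rng = random.Random(42)  # seeded for reproducibility
rewards = [refresh_feed(rng) for _ in range(20)]
# Rewards arrive at unpredictable intervals — the pattern behavioural
# psychology identifies as the most compulsion-forming schedule.
```

Behaviourally, a reward on every refresh would quickly bore users, and a reward on none would extinguish the habit; the intermittent middle ground is what keeps thumbs scrolling.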
One of the most dangerous effects of algorithmic curation is the amplification of misinformation. Controversial or emotionally charged content tends to generate higher engagement, leading to the prioritisation of conspiracy theories, fake news, and outrage-inducing posts.
Algorithms can nudge users towards increasingly extreme content. This is especially evident on platforms like YouTube or TikTok, where the recommendation engine can lead users down rabbit holes of radical political content, pseudoscience, or hate speech.
On the flip side, platforms also suppress certain content through algorithmic demotion or outright bans. While this may be necessary to curb abuse or hate speech, it raises concerns about transparency and potential political bias.
Most social platforms operate in a regulatory grey area. While Europe’s GDPR has imposed stricter data protections, enforcement remains patchy. The US lacks comprehensive federal privacy legislation, leaving much power in the hands of tech giants.
Some voices advocate for ethical design principles: minimising data collection, avoiding manipulative patterns, and promoting user well-being. However, such efforts often conflict with profit motives.
Young users are especially at risk. Instagram’s impact on teen mental health has been well documented, and TikTok has been criticised for exposing children to harmful content. Data collection from underage users raises legal and moral questions.
In authoritarian countries, social media data can be weaponised. Surveillance, censorship, and algorithmic suppression become tools of state control. Even in democracies, mass data harvesting presents national security risks and civil liberties issues.
Awareness is the first defence. Understanding what data is collected and how it's used empowers users to make informed choices about what they share and with whom.
Most platforms offer granular privacy controls. Users should disable location tracking, limit ad personalisation, and periodically review third-party app access.
Privacy-focused alternatives like Mastodon (a decentralised Twitter-like service) or Signal (for messaging) offer more ethical models of online interaction. While they lack the user base of major platforms, they represent a growing pushback against surveillance capitalism.
Civil society, regulators, and users must demand transparency, ethical design, and meaningful control over personal data. Governments must pass comprehensive data protection laws with real teeth, and companies must be held accountable when they violate user trust.
Social media has transformed the way humans connect, communicate, and consume information. Yet behind the glossy interfaces lie powerful systems that harvest data, shape perception, and manipulate behaviour. These systems operate largely outside public scrutiny, driven by algorithms optimised for engagement rather than truth or well-being. While platforms have brought people together, they’ve also fragmented societies, eroded privacy, and undermined democratic discourse. Only through transparency, regulation, ethical design, and informed citizenship can we hope to steer the future of social media toward the public good rather than private gain.