Social media has transformed how we communicate, how we share thoughts, ideas and views, how we get our news, how we learn, how we show our emotions.
It has brought together people who are physically and ideologically apart.
It has driven commerce, grown economies and created jobs.
Social media is great – except when it is misused.
Because in the hands of irresponsible or dangerous people, it can pose a serious threat.
There has been a lot of focus in this regard on Facebook recently, for example, following a spate of incidents involving violence and death on its platform.
In April a gunman shot dead a grandfather in Cleveland, Ohio, and then uploaded a video of the killing to the social network, where it remained for some time before being taken down.
That same month a Thai man live-streamed himself murdering his 11-month-old daughter on Facebook.
And there have been many more examples of violent, threatening and bullying behaviour on the platform that have gone unchallenged too.
Although it has the biggest user base, Facebook is not the only social media company experiencing this problem.
For many years Twitter has been castigated for not doing enough to curb the activities of trolls and bullies.
Many well-known celebrities have quit the micro-blogging platform over the years after being targeted and harassed by people so brave they will not even reveal their own identities.
And for every well-known person being bullied in the spotlight you can be sure there are dozens of other less recognised users being attacked too.
YouTube has also found itself at the centre of the storm.
It has been criticised in the past for not doing enough to take down videos of beheadings and extremist propaganda.
Then in March it emerged that large companies and publicly funded agencies were unwittingly placing ads beside such videos on the site, financially supporting them in the process.
Even this week, as the desperately tragic events unfolded around the arena bombing in Manchester, social media was misused by some to compound the misery of others.
Pictures appeared online of individuals who, it was claimed, were missing, when in fact they weren't.
Fake rumours were also maliciously spread on social networks, causing further panic and concern.
The issue is far from confined to Facebook, Twitter and YouTube.
But given their size and influence, they attract more than their fair share of the problem and are therefore an important part of the solution.
To be fair, all three have made some effort in recent times to start addressing the issue in a more meaningful way than before.
Facebook is recruiting another 3,000 moderators to scan the network for hate speech, child exploitation and those at risk of self-harm or suicide.
That is on top of the 4,500 it already has in place.
But revelations in The Guardian this week about the sometimes flimsy standards those moderators must apply to the content they are asked to review raise further troubling questions about Facebook's commitment to stamping out the problems.
The moderators often have to make judgements in a matter of seconds about possibly disturbing images and potentially threatening posts.
And some of what seems to pass as acceptable in Facebook's eyes would be considered pretty unacceptable by many of its users.
Twitter too has beefed up its response to inappropriate content on the platform.
New tools have been deployed, including artificial intelligence, to prevent trolls from wreaking havoc.
CEO Jack Dorsey recently told RTÉ News that the company is now happier with how it is handling the issue than it was, but more work remains to be done.
YouTube has also introduced new measures to prevent companies from inadvertently supporting objectionable content producers.
But it clearly could do more to ensure that objectionable content does not make it onto the platform at all.
The free speech argument is undoubtedly a tricky tightrope for social media firms to walk.
It is important that measures to protect users are proportionate and do not become conflated with censorship.
But if Facebook, Twitter and YouTube’s recent response to their particular crises shows anything, it is that we as users, advertisers, sharers and influencers have the power to force a change – particularly where there is a threat of lost revenue involved.
The reality is that if we want to continue to enjoy using social platforms, free from the threat of trolls and the upset caused by happening upon inappropriate content, then we need to pressure those in charge of them to do more.
Sure, states could do a lot more to regulate the sector.
Here the Government is talking about setting up a new watchdog to tackle social media harassment.
Just this week the chairman of the Press Council warned the proliferation of fake news on social networks posed a threat to press freedom and democracy.
The EU too is considering extending the kind of standards of taste and decency imposed on traditional broadcasters to technology companies that also publish video content.
Such regulation could help, particularly if it included the power to impose stiff fines – hitting the companies where it hurts most.
But that will only go so far.
Ultimately, by continuing to use social platforms in the face of inaction by those running them, we as users at best fail to condemn, or at worst tacitly endorse, the poor efforts they are making to address the issues.
Instead of throwing our eyes to heaven and moving on to the next post when we see inappropriate content, bullying or fake news on social platforms, we should be doing something about it.
Call it out. Report it. Block it. Even consider stopping using the particular network if it does not improve.
After all, the success, the attraction and ultimately the value of social media is built on connections between people.
And if those people do not shout stop, then nothing will ever change.
Comments welcome via Twitter to @willgoodbody