IF the internet is seen as a lawless place, does that mean that online organisations are above the law?

Google certainly seems to think so. Not content with using our entire online history to build profiles to better target us with money-making advertising, the $136.8 billion turnover business has drawn up plans to encrypt web traffic in its Chrome browser in a move that could pose a serious risk to children. Though supporters of the move reckon it will help foster privacy and security, encryption will also make it harder for internet providers to detect and block harmful material like terrorist propaganda or images of child abuse or suicide, bypassing parental control systems in the process.

Coming just months after a string of parents implicated social media sites in their children’s suicides and called for rules requiring the removal of harmful images, Google’s timing could not have been worse. But if, as experts have warned, the encryption allows Google to harvest even more details about users’ browsing habits - enabling it to make even more money from its advertisers - what motivation does it have to desist? In the absence of any specific laws, none at all.

Not that Google is alone. When Ian Russell spoke out about his daughter Molly’s suicide at the beginning of this year, he said he was in no doubt that seeing material linked to self-harm and suicide on sites including Pinterest and Instagram had contributed to her death. Though he spoke of the need for an independent regulator to ensure distressing material is swiftly removed from social media sites, the response was lukewarm. The social media companies reiterated how loath they are to intervene in users’ activity, while health secretary Matt Hancock simply wrote to those businesses advising them to “purge” harmful content from their sites “once and for all”, offering them no incentive to comply and no penalty for failing to.

Disincentives are key, though. Facebook has long made a virtue of how willing it is to remove inappropriate content from its site, noting that between April and September last year it took down 1.5 billion fake accounts as well as content that breached its internal rules on everything from adult nudity and sexual activity to bullying and the sexual exploitation of children. Yet in March an Islamophobic terrorist was able to live-stream his murderous rampage through a New Zealand mosque, with copies of the footage still available on countless Facebook accounts hours after an intervention from the police. Perhaps unsurprisingly, when a wave of suicide bombings hit Sri Lanka at the weekend, its government moved swiftly to shut down Facebook and other social media sites in a bid to stop the spread of misinformation inciting further violence.

It is exactly the kind of action that is needed to prompt these internet giants to take greater responsibility for the worlds they have created. No matter how much we shout about what is right and proper, it is only when such corporations are faced with the very real prospect of being cut off from large and lucrative markets that they will bother to take note. Facebook proved as much earlier this month when it sprang into action after the Indian government ordered it to remove around 700 fake accounts linked to election interference. There’s nothing like the prospect of being shut out of the second most populous country on earth to focus the corporate mind.

The UK Government appears to be taking note too. Three months after Mr Russell’s plea, and a week after the action against Facebook in India, it published a white paper outlining plans for a new regulator that would crack down on issues ranging from hate speech to cyberbullying and election interference. Though the regulations would require internet businesses themselves to take “reasonable steps to keep users safe”, the plans envisage giving the regulator a full range of enforcement tools.

Significantly, in addition to being able to levy “substantial fines” it would also be given the power to force businesses to withdraw certain services and, crucially, hold senior managers personally liable for any breaches.

At this stage the proposals are very far from becoming law, though. And even if they do, the experience of the EU-wide General Data Protection Regulation (GDPR), introduced last year on a promise of ensuring our personal details would be kept safe online, would suggest they may have little impact. Indeed, despite that regulation promising a tough regime of fines, businesses of all stripes are continuing to suffer data breaches on a daily basis. So far only France has imposed a fine of any note, ordering Google to pay €50 million for, among other things, failing to sufficiently inform users how their data would be used to personalise ads.

But it’s time the law got tough. The internet has revolutionised our lives and the likes of Google have much to be praised for, but that doesn’t mean they should be allowed to operate unchecked. As well as strict rules on what these organisations can and can’t do, we need sanctions that aren’t just stringent but readily enforced too. The safety of our children depends on it.