Last reviewed 24 November 2021

Healthcare providers and members of staff are increasingly facing abusive comments on social media platforms. Christine Grey details how to manage inappropriate posts and support staff who become victims of unfairness or online abuse.

At times, comments can be inappropriate, inaccurate or simply unfair, causing distress and harm to the staff and services that receive them. Increasingly, organisations need to know how to manage such posts and how to support staff who are targeted.

Increasing levels of online abuse

In September 2021 the British Medical Association (BMA) wrote to Health and Social Care Secretary Sajid Javid, urging him to take preventative action against abuse in GP practices by legislating for heavier punishments for verbal abuse, even when there is no threat of physical violence.

The additional pressures brought by the Covid-19 pandemic appear to have aggravated “anti-GP” rhetoric in the media. BMA GP Committee Chair Dr Richard Vautrey has responded to this growing narrative, saying it risks “fuelling a climate of spiralling abuse” that damages both doctors and their patients, and has dismissed calls from the Government for face-to-face consultations to “begin again” as patently “wrong”.

An Online Safety Bill will be laid before Parliament in the 2021 to 2022 session. If it passes into law, online service providers will be subject to new duties to minimise abusive content on their platforms and will be given timescales for removing that content.

In the meantime, the effect of misinformation can be seen in increasing levels of abuse directed at staff on websites and social media. Against this backdrop, it can be difficult to know how to support colleagues at the coalface when they receive abusive messages.

The definition of online abuse

For the purposes of the BMA’s own guidance, “abuse” is behaviour directed at staff by patients or service users, or their friends or family, that is “unwarranted and deliberately intended to upset, threaten, bully or otherwise cause distress and aggravation”. “Online abuse” occurs when a device, such as a computer or mobile phone, is used to send or post abusive messages over the internet.

Messages are considered to be a form of “harassment” if an employee is subjected to two or more connected abusive posts online which cause them distress or alarm.

It is also an offence in law for a person to send online messages that contain threats, or that are grossly offensive, obscene or menacing, where the sender’s intention is to cause the recipient distress or anxiety. A person may also commit the offence of sending “false information” if they post an online communication that they know to be untrue.

Steps to take when experiencing abusive communications

It is important to have a clear organisational policy on dealing with abusive comments online. This should be discussed with staff and applied consistently. Staff should also be given information and advice about their rights and the risks associated with any course of action they might want to take. If anyone is worried about how to deal with an online situation, they should always speak with their data protection officer (DPO) and line manager.

When abusive communications are received, an organisation should focus on recording them, reporting them internally, investigating, and then considering any action carefully.

The first step is to save screenshots and to record and collect the posts. Staff must be careful not to make enquiries about the person posting online or look through any personal files to try to identify them.

Reporting any online abuse to a manager and the DPO early in the process will ensure that staff can be advised appropriately. Where the identity of the patient or service user is known, the manager and the DPO should investigate before any response is issued, as there could be underlying reasons for their behaviour.

Considering an appropriate response

Members of staff should avoid giving an immediate response without discussing it first. If the person posting the online message can be identified by the DPO and it is considered appropriate to contact them, organisations should respond directly by asking them to amend or remove the comment.

Advice on possible courses of action is available from the Information Commissioner’s Office helpline: 0303 123 1113. Support could also be sought from a clinical commissioning group (CCG) communications team, local medical committee (LMC), or medical defence organisation, if necessary.

If the abuse is grossly disturbing, offensive or shocking, the police may need to be involved. The Protection from Harassment Act 1997 covers more serious offences of criminal harassment.

Understanding the offences of harassment, defamation and malicious falsehood is complex, and legal advice should be sought if considering using the law to pursue a remedy.

Taking civil action to remove messages from websites

If a comment that is considered “defamatory” is posted on a website, civil law can be used to get it modified or removed, including from websites like NHS Choices. For an online post to be considered “defamatory”, the Defamation Act 2013 says that it must have caused or be likely to cause “serious harm” to the reputation of the claimant.

If it is not possible to identify the person who posted the defamatory comment, a “Notice of Complaint” has to be sent to the website operator. This must include specific information, which is listed in the BMA’s guidance, Dealing with Unfair Comments on Websites.

Once the operator receives a formal notice asking for a post to be removed, the Defamation (Operators of Websites) Regulations 2013 set out legal requirements governing how it must respond. Separately, once a website operator is put on notice that it is hosting defamatory material, the Electronic Commerce (EC Directive) Regulations 2002 require it to take steps to remove the material or disable access to it; failure to do so makes the operator liable for the publication and exposes it to a claim for damages.

Social media platforms such as Facebook, Twitter, Instagram, TikTok and YouTube have their own policies and conditions for dealing with defamatory or harassing content, and provide channels for reporting it and having it removed.

Conclusion

Online abuse and misinformation are likely to continue and can have a damaging effect on members of staff. Abuse can spiral out of control very quickly and can lead to more serious harm, so a combination of practical measures, prompt action and legal advice where necessary can help to prevent a situation from becoming unmanageable.