In an effort to detect and prevent the sharing of harmful content in messages, LinkedIn provides members with an optional advanced safety feature. When enabled, this feature allows LinkedIn’s automated machine learning models to detect likely harmful content within messages. While we always work to protect members by proactively identifying malware and viruses, these advanced models protect against a wider range of policy violations, including, but not limited to, sexual harassment in the form of text, images, and video, as well as other activities such as attempts to move conversations off LinkedIn.
If the automated systems detect likely harmful content in a message from a sender with whom you have had a previous messaging conversation, the message will be hidden behind a warning. You can dismiss the warning to view the message and, if desired, report it.
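The behavior described above amounts to a simple gate: a message is hidden behind a dismissible warning only when the member has the feature turned on, the sender is a prior messaging contact, and the content is flagged as likely harmful. The sketch below illustrates that gate in Python; the `harm_score` input, the `THRESHOLD` value, and all names are illustrative assumptions, not LinkedIn's actual implementation.

```python
from dataclasses import dataclass

# Illustrative cutoff; LinkedIn's real models and thresholds are not public.
THRESHOLD = 0.8


@dataclass
class Message:
    sender_id: str
    text: str


def should_hide_behind_warning(
    feature_enabled: bool,
    prior_contacts: set[str],
    msg: Message,
    harm_score: float,  # hypothetical model output in [0, 1]
) -> bool:
    """Hide the message behind a dismissible warning only when the member
    opted in, the sender is a prior messaging contact, and the (assumed)
    model score marks the content as likely harmful."""
    if not feature_enabled:
        return False  # member has the advanced safety feature off
    if msg.sender_id not in prior_contacts:
        return False  # warning applies to senders with prior conversations
    return harm_score >= THRESHOLD
```

Note that the warning is a presentation decision, not a deletion: a `True` result means the message is still delivered, just placed behind a warning the member can dismiss.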
How can I control when LinkedIn applies its automated systems to my incoming messages to detect harmful message content?
To turn harmful message detection on or off:
- Click the Me icon at the top of your LinkedIn homepage.
- Select Settings & Privacy from the dropdown.
- Click Data privacy on the left side of the page.
- Click Harmful message detection under the Messaging Experience section.
- Use the toggle to turn this feature on or off.
To turn harmful message detection on or off:
- Tap your profile picture.
- Tap Settings.
- Tap Data privacy.
- Tap Harmful message detection in the Messaging Experience section.
- Use the toggle to turn the feature on or off.