Facebook is an example. Below is Facebook’s Community Standards statement regarding Dignity:
Dignity: We believe that all people are equal in dignity and rights. We expect that people will respect the dignity of others and not harass or degrade others.
Our Community Standards apply to everyone, all around the world, and to all types of content. They’re designed to be comprehensive – for example, content that might not be considered hateful may still be removed for violating a different policy. We recognize that words mean different things or affect people differently depending on their local community, language, or background. We work hard to account for these nuances while also applying our policies consistently and fairly to people and their expression. In the case of certain policies, we require more information and/or context to enforce in line with our Community Standards.
People can report potentially violating content, including Pages, Groups, Profiles, individual content, and comments. We also give people control over their own experience by allowing them to block, unfollow or hide people and posts.
The consequences for violating our Community Standards vary depending on the severity of the violation and the person’s history on the platform. For instance, we may warn someone for a first violation, but if they continue to violate our policies, we may restrict their ability to post on Facebook or disable their profile. We also may notify law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.
Our Community Standards are a guide for what is and isn’t allowed on Facebook. It is in this spirit that we ask members of the Facebook community to follow these guidelines.
Please note that the English version of the Community Standards reflects the most up to date set of the policies and should be used as the master document.
The definition of harassment is used widely, both in daily practice and on digital platforms. The Australian Human Rights Commission defines harassment as follows:
Harassment can be against the law when a person is treated less favourably on the basis of certain personal characteristics, such as race, sex, pregnancy, marital status, breastfeeding, age, disability, sexual orientation, gender identity or intersex status. Some limited exemptions and exceptions apply.
Harassment can include behaviour such as:
- telling insulting jokes about particular racial groups
- sending explicit or sexually suggestive emails or text messages
- displaying racially offensive or pornographic posters or screen savers
- making derogatory comments or taunts about someone’s race
- asking intrusive questions about someone’s personal life, including his or her sex life.
Comparing the practical and the theoretical approach, the policies made by digital companies are still not effective, because even the authorities cannot differentiate thoughtful criticism from harassment. This is compounded by the rise of fake news and the flood of malicious accusations flung from one political party, politician, or president to another, which amplifies harassment and degrades other people. And, of course, no one knows the full truth, and fact-checking systems are human-made. Harassment therefore becomes a bigger term in politics, where most of us encounter it as fake news or false news that turns out to be political strategy or manipulation. It even derails politics into a distasteful and distrusted area of people’s lives.
For digital companies, building an algorithm to detect harassment is still obstructed. Facebook’s policies allow people to block, unfriend, and hide comments if they feel harassed, discriminated against, or degraded. However, there is no one-size-fits-all policy. People in one country may see this sort of problem as normal; others may not. It may even come down to the level of impact each person browsing an online platform perceives, so whether or not something counts as harassment is decided by individualistic views.
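To make the “no one-size-fits-all” point concrete, here is a minimal, hypothetical sketch in Python. It is not Facebook’s actual system; the locale codes and wordlists are invented purely for illustration. The point is only that the same post can be flagged under one locale’s norms and pass under another’s, which is exactly what makes a single global harassment algorithm so hard to build.

```python
# Toy illustration (not any platform's real system) of why one-size-fits-all
# harassment detection breaks down: the same phrase is flagged under one
# locale's wordlist and treated as normal under another's.

# Hypothetical per-locale wordlists; real systems rely on machine-learned
# classifiers, context, and user reports rather than keyword matching.
LOCALE_WORDLISTS = {
    "en-AU": {"idiot", "loser"},
    "en-US": {"idiot"},  # assume "loser" is read as mild banter in this locale
}

def flag_for_review(text: str, locale: str) -> bool:
    """Return True if the post should be queued for human review in this locale."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & LOCALE_WORDLISTS.get(locale, set()))

post = "What a loser!"
print(flag_for_review(post, "en-AU"))  # True  - flagged under one locale's norms
print(flag_for_review(post, "en-US"))  # False - the same text passes elsewhere
```

Even this toy version shows the dilemma: either the platform maintains different rules per community (and is accused of inconsistency), or it applies one rule everywhere (and is accused of ignoring cultural context).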
Therefore, when proposing harassment policies, the diversity of cultural perceptions should still be treated as the tipping point, alongside national interests, legislation, and law enforcement. This is why some countries have requested that Facebook, and other foreign internet companies, store their data within the country and open a local office. We know this is a counter-approach, but it is really happening right now.
- Especially the tension between globalisation and localisation in policy implementation. This makes the idea of a “dignity” policy really complex and difficult to enforce. Have I found anything that discusses actual responses to this policy and how effective it is? Do I think it is a PR exercise?
The PR issue, to me, is not a big matter. If we look at the function of social networks and their “if-then rules”: if people find Facebook too annoying and disturbing, they will quit or use it less than other channels such as Instagram, Twitter, Messenger, or Google (even though some of those channels still belong to or are associated with Facebook).
Some relevant topics I found are below:
– Government requests for Facebook user data continue to increase worldwide
The U.S., India, UK, Germany and France were the most active in making data requests, accounting for 41 percent, 12 percent, nine percent, seven percent and six percent of the total, respectively. Each of those countries’ governments had more than 50 percent of its requests granted, with the U.S. (85 percent), UK (90 percent) and France (74 percent) notable for higher rates.
– Facebook, Google okay with Vietnam’s cybersecurity law: official (VNExpress International, 2018)
– Exclusive: Vietnam cyber law set for tough enforcement despite Google, Facebook pleas (Reuters, 2018)
Vietnam’s cybersecurity law (passed on 12 June 2018 and in effect from 1 January 2019), which was approved by a majority vote in the National Assembly, requires foreign businesses like Facebook and Google to store Vietnamese users’ data within the nation’s territory and provide it to the authorities upon receipt of written requests.
Vietnamese lawmakers approved the new law in June overriding strong objections from the business community, rights groups and Western governments including the United States, who said the measure would undermine economic development, digital innovation and further stifle political dissent.
Alphabet Inc’s Google, Facebook and other big technology companies had hoped a draft decree on how the law would be implemented would soften provisions they find most objectionable.
– Customer Data: Designing for Transparency and Trust (Harvard Business Review, 2015)
As current and former executives at frog, a firm that helps clients create products and services that leverage users’ personal data, we believe this shrouded approach to data gathering is shortsighted. Having free use of customer data may confer near-term advantages. But our research shows that consumers are aware that they’re under surveillance—even though they may be poorly informed about the specific types of data collected about them—and are deeply anxious about how their personal information may be used.


– Stephen King quits Facebook over concerns of ‘false information’ (CNN, 2020)
“I’m quitting Facebook,” the author said on Twitter Friday. “Not comfortable with the flood of false information that’s allowed in its political advertising, nor am I confident in its ability to protect its users’ privacy. Follow me (and Molly, aka The Thing of Evil) on Twitter, if you like.”
– Facebook launches app that will pay users for their data (The Guardian, 2019)
“We believe this work is important to help us improve our products for the people who use Facebook,” Facebook said in a post announcing the launch of Study. “We also know that this kind of research must be clear about what people are signing up for, how their information will be collected and used, and how to opt out of the research at any time.”
Especially, the area that interests me most is the compromising of our data: determining who wants to share their data with tech companies, on what spectrum that sharing would work for both parties, and who would take no interest here at all. What it means to sell our privacy will be framed as both digital rights and human rights, particularly as some countries, such as Singapore and China, slide toward political authoritarianism by asking citizens to join facial recognition programs (and the future will bring more of this).
This makes the ideology too blunt, as the level of adaptation between human perceptions and civilisations on one side and the avalanche of data mining on the other is low. I see a lot of legal frameworks lacking here. It turns humans into puppets in the eyes of some tech giants and those in power.
