Social media routinely intervene against content «alleged to be infringing, without prior notice», at their sole discretion. A controversial regime applied in the name of proper use and security
by Emanuele Bonini (English version of my Eunews article)
Copyright infringement, incitement to racial or ethnic hatred, cheesecake pictures or even porn content, violence. There are many cases in which content posted online is blocked or removed. These interventions are carried out by the social media platforms themselves, but in most cases the reporting of inappropriate posts and messages is left to internet users. But who can really say what is right and what is wrong? The diversity of opinions and differing degrees of sensitivity risk triggering a short circuit in the network, with 'light-hearted' complaints producing immediate consequences. Twitter, Blogger (Google's blog platform), Facebook and Instagram all have more or less the same terms of use (ToU) for their services. There is, of course, as is only right, some control over users. Nevertheless, the way this control is applied raises question marks, since in the name of a 'clean' and safe web there is a real risk of preventive censorship.
Presumption of guilt, preventive and discretionary removal
The network is vast, and the number of users – especially on certain social media – virtually boundless. Monitoring everything that is written is no easy task, so the principle of preventive intervention applies. Blogger offers an example: its team can send, at any time, e-mails like the following:
«Blogger has been notified, according to the terms of the Digital Millennium Copyright Act (DMCA), that certain content in your blog is alleged to infringe upon the copyrights of others. As a result, we have reset the post(s) to "draft" status. (If we did not do so, we would be subject to a claim of copyright infringement, regardless of its merits). If you believe that you have the rights to post the content at issue here, you can file a counter-claim».
The DMCA is a US copyright law that provides guidelines for online service provider liability in cases of copyright infringement. What should be noticed is that interventions are made on the basis of a possible infringement. Is a post 'alleged to infringe' the rules actually in violation of them? There is a huge difference between a situation that is illegal and one that is merely thought to be illegal. Yet, in case of doubt, the giants of the internet opt for preventive censorship.
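The notice-and-takedown sequence described in the e-mail above can be summarised as a simple state machine: an allegation alone demotes the post, and only the author's counter-claim triggers an actual review of the merits. The sketch below is purely illustrative; all function and field names are invented, not part of any real platform API.

```python
# Hypothetical sketch of the "remove first, review later" workflow
# described by the DMCA takedown e-mail. All names are invented for
# illustration; no real platform exposes this interface.

def handle_dmca_notice(post):
    """On a mere allegation, revert the post to draft without assessing merit."""
    post["status"] = "draft"        # preventive removal, no prior notice
    post["pending_claim"] = True    # the allegation is recorded, not verified
    return post

def handle_counter_claim(post):
    """Only a counter-claim from the author triggers an actual review."""
    if post.get("pending_claim"):
        post["status"] = "under_review"   # merits are examined only now
    return post

post = {"id": 1, "status": "published"}
handle_dmca_notice(post)
print(post["status"])   # the post is offline on the allegation alone
handle_counter_claim(post)
print(post["status"])   # review of the merits starts only after the author objects
```

Note how the burden of proof is inverted: the post goes offline first, and the author must act to get the merits examined at all.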
Looking at the Twitter ToU, one can read that «We reserve the right to remove Content alleged to be infringing without prior notice, at our sole discretion, and without liability to you». In return for being 'responsibility-free', Twitter can basically do whatever it likes. Once again, the rules refer to content merely presumed to be irregular, to be dealt with by immediate, uncommunicated removal. The same policy applies to Facebook's services: in case of a report from a rights owner claiming that content on Facebook infringes their intellectual property rights, «we may need to promptly remove that content from Facebook without contacting you first. If you believe the content should not have been removed, you can follow up with them directly to try to resolve the issue». First the complaint, then the deletion; only afterwards any attempt to negotiate. Instagram acts no differently: «We reserve the right to modify or terminate the Service or your access to the Service for any reason, without notice, at any time, and without liability to you», state the terms of use of the photo-sharing service.
The reporting issue
What we have seen so far concerns copyright (although Facebook also has strict rules against overly sexy pictures and porn content), but in times of fake news and terrorism the need for new and further controls has emerged. Countering illegal online hate speech requires constant monitoring of the web and the removal of every message of that kind. National security concerns thus risk leading to even stricter policies. Already now, according to its ToU, Google «reserves the right (but shall have no obligation) to pre-screen, review, flag, filter, modify, refuse or remove any or all content from any service» offered to customers, and it is easy to guess that, in the name of security, such a scheme will be put in place more and more frequently. It should be recalled that, according to the online operators themselves, assessing everything posted on the internet is very difficult («we may not monitor or control the Content posted via the Services», say the Twitter ToU), so the most effective way to detect illegal content is to collect complaints and notifications from users. Is that really an effective system? Of course everybody can and must, at any moment, report an unacceptable message. That is a civic duty, no doubt about it, but it can turn into a controversial tool. Content that offends one reader's sensibility is not always 'universally' offensive, since sensibility is personal and subjective: what appears nasty to one person may not provoke the same feeling – and the same reaction – in others. In case of doubt, and with no clear provisions on the matter, social media can intervene on anything, at any moment, at their complete discretion.
Assessment duty, the EU goes faster
Every time a report of a potential rules infringement is forwarded, an assessment starts immediately. Social media must carry it out, and in doing so they can temporarily remove the post for as long as needed. Can decorum, security and copyright justify all this? The matter is a very tricky one, and the line between proper use of the net and preventive censorship is very thin.
The European Union has formed an alliance with the major social media (Facebook, Twitter, YouTube and Microsoft, with Instagram and Google+ to join soon) in order to detect and remove all messages instigating hatred, violence, racism and discrimination. In 2017, 2,982 notifications were submitted to the IT companies taking part in this special alliance. According to EU figures, 70% of these messages (2,087) were removed, and more than 81% of the assessments took place within 24 hours. Still, assessments remain discretionary, even when (national) authorities responsible for monitoring the web are involved. One example is a tweet from Matteo Salvini, leader of the Italian party Northern League, in which he invited civil society to hunt for immigrants. The tweet was not removed. Vera Jourova, Commissioner for Justice, Consumers and Gender Equality, pointed out that «in case of doubt messages are left on-line», because Europe decided not to intervene in a preventive way. I beg your pardon, wasn't Salvini's tweet hate speech? This example clearly shows how censorship can be unreliable and even unfair. Furthermore, the non-preventive approach promoted by the European Commission apparently does not apply to copyright-related issues. So be careful: Big Brother is watching you.
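As a quick sanity check, the EU figures quoted above are internally consistent: 2,087 removals out of 2,982 notifications is indeed about 70%.

```python
# Consistency check of the EU monitoring figures cited in the text:
# 2,087 messages removed out of 2,982 notifications submitted in 2017.
notifications = 2982
removed = 2087
removal_rate = removed / notifications
print(f"{removal_rate:.1%}")  # prints 70.0%
```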