The German government has voted in favor of a new law that, among other things, imposes fines of up to €50 million on social media companies that fail to remove ‘obviously illegal’ content within a set time frame.
The new law, known as NetzDG, takes effect in October of this year and sets strict timelines for the removal of illegal and offensive content from social media platforms.
A social media site with more than 2 million users must remove obviously illegal content, including hate speech, within 24 hours. Controversial content, which may or may not qualify as illegal, must be reviewed and evaluated for removal within one week.
Failure to remove illegal content as specified may result in fines ranging from €5 million for the individual responsible for the content to €50 million for the company.
Critics have argued that the NetzDG regulation restricts freedom of speech and infringes on EU law by attempting to subvert the country-of-origin principle, which states that service providers in the EU can be subject only to the regulations of the member state in which they are established.
Moreover, because ‘obviously’ illegal content is not clearly defined, and the potential fines are extremely high, there is concern that a company’s fallback position will be to delete any questionable content, further eroding free speech.
The NetzDG law must still be ratified by the EU Commission, where it could be blocked if it is found to breach the country-of-origin principle, which is grounded in the EU e-commerce directive, or to conflict with other aspects of EU law.
German Minister of Justice Heiko Maas said that while the NetzDG does not solve every problem associated with removing illegal content from social media, a legal mechanism was necessary because without it, “the large platform operators would not fulfill their obligations” to remove such content.
However, a report issued earlier this month by the European Commission showed that voluntary participation by technology companies in the effort to remove hate speech had produced marked results over the previous six months. Companies removed content flagged as hate speech 59% of the time, more than twice the rate of the prior period. The share of content reviewed within 24 hours of notification rose from 40% to 51%, showing that participating companies improved their response times to complaints. This trend of improvement through voluntary participation by social media companies seems to undercut the necessity of stringent regulations governing content.