EU law targets Big Tech over hate speech, disinformation

FILE - The logo of Google is displayed on a carpet at the entrance hall of Google France in Paris, Monday, Nov. 18, 2019. European Union officials are nearing agreement on a set of new rules aimed at protecting internet users by forcing big tech companies like Google and Facebook to step up their efforts to curb the spread of illegal content. EU officials were negotiating over the final details of the new legislation, dubbed the Digital Services Act, on Friday, April 22, 2022. (AP Photo/Michel Euler, File) (Michel Euler, Copyright 2021 The Associated Press. All rights reserved)

BRUSSELS – Big tech companies like Google and Facebook parent Meta will have to police their platforms more strictly to better protect European users from hate speech, disinformation and other harmful online content under landmark EU legislation approved early Saturday.

European Union officials clinched the agreement in principle on the Digital Services Act after lengthy final negotiations that began Friday. The law will also force tech companies to make it easier for users to flag problems, ban online ads aimed at kids and empower regulators to punish noncompliance with billions in fines.

The Digital Services Act, one half of an overhaul for the 27-nation bloc's digital rulebook, helps cement Europe’s reputation as the global leader in efforts to rein in the power of social media companies and other digital platforms.

“With the DSA, the time of big online platforms behaving like they are ‘too big to care’ is coming to an end,” said EU Internal Market Commissioner Thierry Breton.

EU Commission Vice President Margrethe Vestager added that “with today’s agreement we ensure that platforms are held accountable for the risks their services can pose to society and citizens.”

The act is the EU’s third significant law targeting the tech industry, a notable contrast with the U.S., where lobbyists representing Silicon Valley’s interests have largely succeeded in keeping federal lawmakers at bay.

While the Justice Department and Federal Trade Commission have filed major antitrust actions against Google and Facebook, Congress remains politically divided on efforts to address competition, online privacy, disinformation and more.

The EU’s new rules should make tech companies more accountable for content created by users and amplified by their platforms’ algorithms.

The biggest online platforms and search engines, defined as having more than 45 million users, will face extra scrutiny.

Breton said the EU will have plenty of stick to back up its rules, including “effective and dissuasive” fines of up to 6% of a company's annual global revenue, which for big tech companies would amount to billions of dollars. Repeat offenders could be banned from the EU, he said.

The tentative agreement was reached between the EU parliament and the bloc's member states. It still needs to be officially rubber-stamped by those institutions, which is expected after summer but should pose no political problem. The rules then won't start applying until 15 months after that approval, or Jan. 1, 2024, whichever is later.

“The DSA is nothing short of a paradigm shift in tech regulation. It’s the first major attempt to set rules and standards for algorithmic systems in digital media markets,” said Ben Scott, a former tech policy advisor to Hillary Clinton who’s now executive director of advocacy group Reset.

The need to regulate Big Tech more effectively came into sharper focus after the 2016 U.S. presidential election, when Russia used social media platforms to try to influence voters. Tech companies like Facebook and Twitter promised to crack down on disinformation, but the problems have only worsened. During the pandemic, health misinformation blossomed and again the companies were slow to act, cracking down after years of allowing anti-vaccine falsehoods to thrive on their platforms.

Under the EU law, governments would be able to ask companies to take down a wide range of content that would be deemed illegal, including material that promotes terrorism, child sexual abuse, hate speech and commercial scams. Social media platforms like Facebook and Twitter would have to give users tools to flag such content in an “easy and effective way” so that it can be swiftly removed. Online marketplaces like Amazon would have to do the same for dodgy products, such as counterfeit sneakers or unsafe toys.

These systems will be standardized to work the same way on any online platform.

Germany’s justice minister, Marco Buschmann, said the rules would safeguard freedom of speech online by ensuring sites can be made to review decisions to delete posts. At the same time, he said, platforms will be required to prevent their services from being misused.

“Death threats, aggressive insults and incitement to violence aren’t expressions of free speech but rather attacks on free and open discourse,” he said.

Tech companies, which had furiously lobbied Brussels to water down the legislation, responded cautiously.

Twitter said it would review the rules “in detail” and that it supports “smart, forward thinking regulation that balances the need to tackle online harm with protecting the Open Internet.”

TikTok said it awaits the act's full details but “we support its aim to harmonize the approach to online content issues and welcome the DSA’s focus on transparency as a means to show accountability."

Google said it looks forward to “working with policymakers to get the remaining technical details right to ensure the law works for everyone.” Amazon referred to a blog post from last year that said it welcomed measures that enhance trust in online services. Facebook didn’t respond to a request for comment.

The Digital Services Act bans ads targeted at minors, as well as ads based on users' gender, ethnicity or sexual orientation. It also bans deceptive techniques companies use to nudge people into doing things they didn’t intend to, such as signing up for services that are easy to join but hard to cancel.

To show they’re making progress on limiting these practices, tech companies would have to carry out annual risk assessments of their platforms.

Up until now, regulators have had no access to the inner workings at Google, Facebook and other popular services. But under the new law, the companies will have to be more transparent and provide information to regulators and independent researchers on content-moderation efforts. This could mean, for example, making YouTube turn over data on whether its recommendation algorithm has been directing users to more Russian propaganda than normal.

To enforce the new rules, the EU's executive Commission is expected to hire more than 200 new staffers. To pay for it, tech companies will be charged a “supervisory fee."

Experts said the new rules will likely spark copycat regulatory efforts by governments in other countries, while tech companies will also face pressure to roll out the rules beyond the EU’s borders.

“If Joe Biden stands at the podium and says ‘By golly, why don’t American consumers deserve the same protections that Google and Facebook are giving to European consumers,’ it’s going to be difficult for those companies to deny the application of the same rules” elsewhere, Scott said.

But they're unlikely to do so voluntarily, said Zach Meyers, senior research fellow at the Centre for European Reform think tank. There is just too much money on the line if a company like Meta, which owns Facebook and Instagram, is restricted in how it can target advertising at specific groups of users.

“The big tech firms will heavily resist other countries adopting similar rules, and I cannot imagine the firms voluntarily applying these rules outside the EU,” Meyers said.

The EU reached a separate agreement last month on its Digital Markets Act, a law aimed at reining in the market power of tech giants and making them treat smaller rivals fairly.

And in 2018, the EU’s General Data Protection Regulation set the global standard for data privacy protection, though it has faced criticism for not being effective at changing the behavior of tech companies. Much of the problem centers on the fact that a company’s lead privacy regulator is in the country where its European head office is located, which for most tech companies is Ireland.

Irish regulators have opened dozens of data-privacy investigations, but have only issued judgments for a handful. Critics say the problem is understaffing, but the Irish regulator says the cases are complex and time-consuming.

EU officials say they have learned from that experience and will make the Commission the enforcer for the Digital Services Act and Digital Markets Act.

___

AP Business Writer Kelvin Chan reported from London. AP Technology Writer Barbara Ortutay in Oakland, California, and Frank Jordans in Berlin contributed to this story.

___

See all of AP’s tech coverage at https://apnews.com/hub/technology.
