WASHINGTON — Former President Donald J. Trump called multiple times for repealing the law that shields tech companies from legal responsibility over what people post. President Biden, as a candidate, said the law should be “revoked.”
But the lawmakers aiming to weaken the law have started to agree on a different approach. They are increasingly focused on eliminating protections for specific kinds of content rather than making wholesale changes to the law or eliminating it entirely.
That has still left them a question with potentially wide-ranging outcomes: What, exactly, should lawmakers cut?
One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.
Even these more modest changes to the legal shield, Section 230 of the Communications Decency Act, could ripple across the internet. The adjustments could give companies like Facebook and YouTube an incentive to take down certain types of content while leaving up others. Critics of the ideas also say there is a huge potential for unintended consequences, citing a 2018 law that stripped the immunity from platforms that knowingly facilitated sex trafficking, which they say made some sex work more unsafe.
“I think we are trying to say, ‘How can you narrowly draw some exceptions to 230 in a way that doesn’t interfere with your free speech rights?’” said Senator Mark Warner of Virginia, who has introduced legislation to trim the law with a fellow Democrat, Senator Mazie K. Hirono of Hawaii.
The calls for change gained momentum after the Jan. 6 attack on the Capitol, which was carried out in part by people linked to QAnon and other conspiracy theories that thrive on social media. Critics say the shield has let the tech giants ignore criminal activity, hate speech and extremist content posted on their services.
The law protects websites from many lawsuits over content posted by their users or the way sites choose to moderate that content. Passed in 1996, it enabled the rise of large online services because they didn’t need to assume new legal liability each time they added another one of their billions of users.
Major tech companies have said they are open to trimming the law, an effort to shape changes they see as increasingly likely to happen. Facebook and Google, the owner of YouTube, have signaled that they are willing to work with lawmakers changing the law, and some smaller companies recently formed a lobbying group to shape any changes.
Some small steps — like pushing for content to be taken down after a court order — could earn the support of tech companies. But others, like stripping immunity from all ads, would probably not.
Many lawmakers say creating carve-outs to the law would allow them to tackle the most pernicious instances of disinformation or hate speech online without disrupting the entire internet economy, steamrollering small websites or running afoul of free speech rights.
“There isn’t any legislation that deals with everything,” said Representative Anna G. Eshoo, a California Democrat who has proposed carving out certain content from the law. “When someone says eliminate Section 230, the first thing it says to me is that they don’t really understand it.”
But there are many other unresolved issues. Lawmakers must decide how close they want to get to the core business models of the platforms versus just encouraging better moderation. One way to cut to the core would be to limit the shield when a post is amplified by the proprietary algorithms that rank, sort and recommend content to users, as Ms. Eshoo’s bill would in some cases. Or, as Mr. Warner’s bill does, lawmakers could simply say Section 230 shouldn’t apply to any ads at all.
And they must grapple with the question of whether any changes should apply only to the biggest platforms, like Facebook and YouTube, or take effect across the entire internet. Smaller companies have argued that they should be exempt from many changes.
“I think we want to take as modest of a step as possible,” said Hany Farid, a professor at the University of California, Berkeley, who researches misinformation. “Give it a year or two, see how it unfolds and make adjustments.”
The lawmakers’ approach of making targeted changes to the law is a familiar one. In 2018, Congress passed a law that removed Section 230 protections when platforms knowingly facilitated sex trafficking.
But Mr. Trump was focused on repealing the law. In his final weeks in the White House, he pushed congressional Republicans to end the protections in an unrelated defense funding bill. His supporters and allies may not be satisfied by the targeted changes proposed by the Democrats who now control both the Senate and the House.
The White House did not immediately offer a comment on the issue on Monday. But a December op-ed that was co-written by Bruce Reed, Mr. Biden’s deputy chief of staff, said that “platforms should be held accountable for any content that generates revenue.” The op-ed also said that while carving out specific types of content was a start, lawmakers would do well to consider giving platforms the entire liability shield only on the condition that they properly moderate content.
Supporters of Section 230 say even small changes could hurt vulnerable people. They point to the 2018 anti-trafficking bill, which sex workers say made it harder to vet potential clients online after some of the services they used closed, fearing new legal liability. Instead, sex workers have said they must now risk meeting with clients in person without using the internet to ascertain their intentions at a safe distance.
Senator Ron Wyden, the Oregon Democrat who co-wrote Section 230 while in the House, said measures meant to address disinformation on the right could be used against other political groups in the future.
“If you remember 9/11, and you had all these knee-jerk reactions to those horrible tragedies,” he said. “I think it would be a huge mistake to use the disgusting, nauseating attacks on the Capitol as a vehicle to suppress free speech.”
Industry officials say carve-outs to the law could nonetheless be extremely difficult to carry out.
“I appreciate that some policymakers are trying to be more specific about what they don’t like online,” said Kate Tummarello, the executive director of Engine, an advocacy group for small companies. “But there’s no universe in which platforms, especially small platforms, will automatically know when and where illegal speech is happening on their site.”
The issue may take center stage when the chief executives of Google, Facebook and Twitter testify late this month before the House Energy and Commerce Committee, which has been examining the future of the law.
“I think it’s going to be a huge issue,” said Representative Cathy McMorris Rodgers of Washington, the committee’s top Republican. “Section 230 is really driving it.”