How Congress’s Extension of Section 702 May Expand the NSA’s Warrantless Surveillance Authority

By David Ruiz

Last month, Congress reauthorized Section 702, the controversial law the NSA uses to conduct some of its most invasive electronic surveillance. With Section 702 set to expire, Congress had a golden opportunity to fix the worst flaws in the NSA’s surveillance programs and protect Americans’ Fourth Amendment rights to privacy. Instead, it re-upped Section 702 for six more years.

But the bill passed by Congress and signed by the president, labeled S. 139, didn’t just extend Section 702’s duration. It also may expand the NSA’s authority in subtle but dangerous ways.

The reauthorization marks the first time that Congress passed legislation that explicitly acknowledges and codifies some of the most controversial aspects of the NSA’s surveillance programs, including “about” collection and “backdoor searches.” That will give the government more legal ammunition to defend these programs in court, in Congress, and to the public. It also suggests ways for the NSA to loosen its already lax self-imposed restraints on how it conducts surveillance.

Background: NSA Surveillance Under Section 702

First passed in 2008 as part of the FISA Amendments Act—and reauthorized last week until 2023—Section 702 is the primary legal authority that the NSA uses to conduct warrantless electronic surveillance against non-U.S. “targets” located outside the United States. The two publicly known programs operated under Section 702 are “upstream” and “downstream” (formerly known as “PRISM”).

Section 702 differs from other foreign surveillance laws because the government can pick targets and conduct the surveillance without a warrant signed by a judge. Instead, the Foreign Intelligence Surveillance Court (FISC) merely reviews and signs off on the government’s high-level plans once a year.

In both upstream and downstream surveillance, the intelligence community collects and searches communications it believes are related to “selectors.” Selectors are search terms that apply to a target, like an email address, phone number, or other identifier.

Under downstream, the government requires companies like Google, Facebook, and Yahoo to turn over messages “to” and “from” a selector—gaining access to things like emails and Facebook messages.

Under upstream, the NSA relies on Internet providers like AT&T to provide access to large sections of the Internet backbone, intercepting and scanning billions of messages rushing between people and through websites. Until recently, upstream resulted in the collection of communications to, from, or about a selector. More on “about” collection below.
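
To make the distinction concrete, here is a minimal sketch, in TypeScript, of the difference between to/from matching and “about” matching. The message, the selector, and the matching logic are all invented for illustration; they are not drawn from any real collection system.

```typescript
// Hypothetical message and selector types, for illustration only.
interface Message {
  from: string;
  to: string[];
  body: string;
}

type Selector = string; // e.g. an email address or phone number

// "To/from" matching: the selector appears in the message envelope.
function matchesToFrom(msg: Message, selector: Selector): boolean {
  return msg.from === selector || msg.to.includes(selector);
}

// "About" matching: the selector merely appears in the message body,
// even though neither sender nor recipient is the target.
function matchesAbout(msg: Message, selector: Selector): boolean {
  return !matchesToFrom(msg, selector) && msg.body.includes(selector);
}

const msg: Message = {
  from: "alice@example.com",
  to: ["bob@example.com"],
  body: "You can reach Carol at carol@example.com.",
};

console.log(matchesToFrom(msg, "carol@example.com")); // false
console.log(matchesAbout(msg, "carol@example.com"));  // true: swept up anyway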

The overarching problem with these programs is that they are far from “targeted.” Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review.

In countless ways, Section 702 surveillance violates Americans’ privacy and other constitutional rights, not to mention the communications privacy of millions of people around the world, which is ignored entirely.

This is why EFF vehemently opposed the Section 702 reauthorization bill that the President recently signed into law. We’ve been suing since 2006 over the NSA’s mass surveillance of the Internet backbone and trying to end these practices in the courts. While S. 139 was described by some as a reform, the bill was really a total failure to address the problems with Section 702. Worse still, it may expand the NSA’s authority to conduct this intrusive surveillance.

Codified “About” Collection

One key area where the new reauthorization could expand Section 702 is the practice commonly known as “about” collection (or “abouts” collection in the language of the new law). For years, when the NSA conducted its upstream surveillance of the Internet backbone, it collected not just communications “to” and “from” a selector like an email address, but also messages that merely mentioned that selector in the message body.

This is a staggeringly broad dragnet tactic. Have you ever written someone’s phone number inside an email to someone else? If that number was an NSA selector, your email would have been collected, though neither you nor the email’s recipient was an NSA target. Have you ever mentioned someone’s email address through a chat service at work? If that email address was an NSA selector, your chat could have been collected, too.

“About” collection involves scanning and collecting the contents of Americans’ Fourth Amendment-protected communications without a warrant. That’s unconstitutional, and the NSA should never have been allowed to do it in the first place. Unfortunately, the FISC and other oversight bodies tasked with overseeing Section 702 surveillance often ignore major constitutional issues. 

So the FISC permitted “about” collection to go on for years, even though the collection continued to raise complex legal and technical problems. In 2011, the FISC warned the NSA against collecting too many “non-target, protected communications,” in part due to “about” collection. Then the court imposed limits on upstream, including in how “about” communications were handled. And when the Privacy and Civil Liberties Oversight Board issued its milquetoast report on Section 702 in 2014, it said that “about” collection pushed “the entire program close to the line of constitutional reasonableness.”

For its part, the NSA asserted that “about” collection was technically necessary to ensure the agency actually collected all the to/from communications it claimed it was entitled to.

In April 2017, we learned that the NSA’s technical and legal problems with “about” collection were even more pervasive than previously disclosed, and it had not been complying with the FISC’s already permissive limits. As a result, the NSA publicly announced it was ending “about” collection entirely. This was something of a victory, following years of criticism and pressure from civil liberties groups and internal government oversight. But the program suspension rested on technical and legal issues that may change over time, and not a change of heart or a controlling rule. Indeed, the suspension is not binding on the NSA in the future, since it could simply restart “about” collection once it figured out a “technical” solution to comply with the FISC’s limits.

Critically, as originally written, Section 702 did not mention “about” collection. Nor did Section 702 provide any rules on collecting, accessing, or sharing data obtained through “about” collection.

But the new reauthorization codifies this controversial NSA practice.

According to the new law, “The term ‘abouts communication’ means a communication that contains a reference to, but is not to or from, a target of an acquisition authorized under section 702(a) of the Foreign Intelligence Surveillance Act of 1978.”

Under the new law, if the intelligence community wants to restart “about” collection, it has a path to doing so that includes finding a way to comply with the FISC’s minimal limitations. Once that’s done, an affirmative act of Congress is required to prevent it. If Congress does not act, then the NSA is free to continue this highly invasive “about” collection.

Notably, by including collection of communications that merely “contain a reference to . . .  a target,” the new law may go further than the NSA’s prior practice of collecting communications content that contained specific selectors. The NSA might well argue that the new language allows it to collect emails that refer to targets by name or in other less specific ways, rather than only those actually containing a target’s email address, phone number, or other “selectors.”

Beyond that, the reauthorization codifies a practice that, up to now, has existed solely due to the NSA’s interpretation and implementation of the law. Before this year’s Section 702 reauthorization, the NSA could not credibly argue Congress had approved the practice. Now, if the NSA restarts “about” collection, it will argue it has express statutory authorization to do so. Explicitly codifying “about” collection is thus an expansion of the NSA’s spying authority.

Finally, providing a path to restart that practice absent further Congressional oversight, when that formal procedure did not exist before, is an expansion of the NSA’s authority.

For years, the NSA has pushed its boundaries. According to multiple unsealed FISC opinions, the agency has repeatedly violated its own policies on collection, access, and retention. Infamously, relying on an unjustifiable interpretation of a separate statute—Section 215—the NSA conducted illegal bulk collection of Americans’ phone records for years. It began that bulk collection without any court or statutory authority whatsoever, and only later persuaded the FISC to condone it.

History teaches that when Congress gives the NSA an inch, the NSA will take a mile. So we fear that the new NSA spying law’s unprecedented language on “about” collection will contribute to an expansion of the already excessive Section 702 surveillance.

Codified Backdoor Searches

The Section 702 reauthorization provides a similar expansion of the intelligence community’s authority to conduct warrantless “backdoor searches” of databases of Americans’ communications. To review, the NSA’s surveillance casts an enormously wide net, collecting (and storing) billions of emails, chats, and other communications involving Americans who are not targeted for surveillance. The NSA calls this “incidental collection,” although it is far from unintended. Once collected, these communications are often stored in databases which can be accessed by other agencies in the intelligence community, including the FBI. The FBI routinely runs searches of these databases using identifiers belonging to Americans when starting—or even before officially starting—investigations into domestic crimes that may have nothing to do with foreign intelligence issues. As with the initial collection, government officials conduct backdoor searches of Section 702 communications content without getting a warrant or other individualized court oversight—which violates the Fourth Amendment.

Just as with “about” collection, nothing in the original text of Section 702 authorized or even mentioned the unconstitutional practice of backdoor searches. That did not stop the FISC from approving backdoor searches under certain circumstances, and it allowed other courts to uphold surveillance conducted under Section 702 while sidestepping the question of whether these searches are constitutional.

Just as with “about” collection, the latest Section 702 reauthorization acknowledges backdoor searches for the first time. It imposes a warrant requirement only in very narrow circumstances: where the FBI runs a search in a “predicated criminal investigation” not connected to national security. Under FBI practice, a predicated investigation is a formal, advanced case. By all accounts, though, backdoor searches are normally used far earlier. In other words, the new warrant requirement will rarely, if ever, apply. It is unlikely to prevent a fishing expedition through Americans’ private communications. Even where a search is inspired by a tip about a serious domestic crime, the FBI should not have warrantless access to a vast trove of intimate communications that would otherwise require complying with stringent warrant procedures.

But following the latest reauthorization, the government will probably argue that Congress gave its OK to the FBI searching sensitive data obtained through NSA spying under Section 702, and using it in criminal cases against Americans.

In sum, the latest reauthorization of Section 702 is best seen as an expansion of the government’s spying powers, and not just an extension of the number of years that the government may exercise these powers. Either way, the latest reauthorization is a massive disappointment. That’s why we’ve pledged to redouble our commitment to seek surveillance reform wherever we can: through the courts, through the development and spread of technology that protects our privacy and security, and through Congressional oversight.


UK Court Delivers Blow to Mass Surveillance State, Win for Privacy Advocates

By Josiah Wilmoth

A UK court has dealt a winning hand to privacy advocates — and a blow to the mass surveillance state — through its ruling on the controversial Data Retention and Investigatory Powers Act (DRIPA).

The ruling, which was issued by three appellate judges, said that DRIPA was “inconsistent with EU law” because it failed to safeguard citizens’ phone records and internet browsing history from unauthorized access by police officers, according to a report in The Guardian.

DRIPA had been passed as “emergency legislation” in 2014 after just a single day of parliamentary debate, and it laid the foundation for its eventual replacement, the 2016 Investigatory Powers Act.

The Snooper’s Charter tried to institute a mass surveillance state

Nicknamed the “snooper’s charter,” the Investigatory Powers Act greatly expanded the government’s ability to spy on its citizens without a warrant, even for purposes other than solving crimes. What caused much public outrage was the fact that internet records would need to be stored and made available to a whole host of government agencies – including many non-law-enforcement arms. NSA whistleblower Edward Snowden denounced it as “the most extreme surveillance in the history of western democracy.”

Human rights group Liberty argued the case against DRIPA on behalf of Labour MP Tom Watson, who said that the ruling will force the government to curtail the scope of the Investigatory Powers Act.

“The government must now bring forward changes to the Investigatory Powers Act to ensure that hundreds of thousands of people, many of whom are innocent victims or witnesses to crime, are protected by a system of independent approval for access to communications data,” he said.

The government, meanwhile, attempted to downplay the ruling as inconsequential since DRIPA is no longer in force. Security minister Ben Wallace defended the regime by arguing that mass surveillance was necessary to prevent terrorism and catch child predators.

“It is often the only way to identify paedophiles involved in online child abuse as it can be used to find where and when these horrendous crimes have taken place,” he said.

However, Martha Spurrier, the director of Liberty, said that the ruling was “crystal clear” in its indictment of the mass surveillance regime.

“Yet again a UK court has ruled the government’s extreme mass surveillance regime unlawful. This judgement tells ministers in crystal clear terms that they are breaching the public’s human rights,” she said.

“No politician is above the law,” Spurrier concluded. “When will the government stop bartering with judges and start drawing up a surveillance law that upholds our democratic freedoms?”


Analog Equivalent Privacy Rights (12/21): Our parents bought things untracked, their footsteps in store weren’t recorded

By Rick Falkvinge

In the last article, we focused on how people are tracked today when using credit cards instead of cash. But few pay attention to the fact that we’re tracked when using cash today, too.

Few people pay attention to the little sign on the revolving door at Schiphol Airport in Amsterdam, Netherlands. It says that wi-fi and bluetooth tracking of every single individual is taking place in the airport.

What sets Schiphol Airport apart isn’t that they track individual people’s movements to the sub-footstep level in a commercial area. (It’s for commercial purposes, not security purposes.) No, what sets Schiphol apart is that they bother to tell people about it. (The Netherlands tends to take privacy seriously, as does Germany, and for the same reason.)

Locator beacons are practically standard in bigger commercial areas now. They ping your phone using wi-fi and bluetooth, and using signal-strength triangulation, a grid of locator beacons can show how every single individual is moving in realtime at the sub-footstep level. This is used to “optimize marketing” — in other words, to find ways to trick people’s brains into spending resources they otherwise wouldn’t have. Our own loss of privacy is being turned against us, as it always is.
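
A rough sketch of the underlying math, in TypeScript: each beacon converts received signal strength into an estimated distance using a path-loss model, and three distance estimates pin down a position. Every constant, position, and reading here is invented for illustration; real deployments use denser grids and heavier filtering, which is how they reach sub-footstep resolution.

```typescript
// Toy illustration of how a beacon grid localizes a phone from received
// signal strength (RSSI). All numbers below are invented.
interface Beacon {
  x: number;    // beacon position, metres
  y: number;
  rssi: number; // received signal strength from the phone, dBm
}

// Log-distance path-loss model: rssi = txPower - 10 * n * log10(distance).
function rssiToDistance(rssi: number, txPower = -40, n = 2.5): number {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// Classic trilateration: subtract pairs of circle equations to get two
// linear equations in (x, y), then solve them.
function trilaterate(beacons: [Beacon, Beacon, Beacon]): { x: number; y: number } {
  const [a, b, c] = beacons;
  const [ra, rb, rc] = beacons.map((p) => rssiToDistance(p.rssi));
  const A = 2 * (b.x - a.x), B = 2 * (b.y - a.y);
  const C = ra ** 2 - rb ** 2 - a.x ** 2 + b.x ** 2 - a.y ** 2 + b.y ** 2;
  const D = 2 * (c.x - b.x), E = 2 * (c.y - b.y);
  const F = rb ** 2 - rc ** 2 - b.x ** 2 + c.x ** 2 - b.y ** 2 + c.y ** 2;
  const y = (F - (D * C) / A) / (E - (D * B) / A);
  return { x: (C - B * y) / A, y };
}

// Three wall-mounted beacons hear the same phone:
console.log(trilaterate([
  { x: 0, y: 0, rssi: -60 },
  { x: 10, y: 0, rssi: -65 },
  { x: 0, y: 10, rssi: -62 },
]));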

Where do people stop for a while, what catches their attention, what doesn’t catch their attention, what’s a roadblock for more sales?

These are legitimate questions. However, taking away people’s privacy in order to answer those questions is not a legitimate method to answer them.

This kind of mass individual tracking has even been deployed at city levels, which happened in complete silence until the Privacy Oversight Board of a remote government sounded the alarm. The city of Västerås got the green light to continue tracking once some formal criteria were met.

Yes, this kind of people tracking is documented to have been already rolled out citywide in at least one small city in a remote part of the world (Västerås, Sweden). With the government’s Privacy Oversight Board having shrugged and said “fine, whatever”, don’t expect this to stay in the small town of Västerås. Correction, wrong tense: don’t expect it to have stayed in just Västerås, where it was greenlit three years ago.

Our analog parents had the ability to walk around untracked in the city and street of their choice, without it being used or held against them. It’s not unreasonable that our digital children should have the same ability.

There’s one other way to buy things with cash that avoids this kind of in-store tracking, and that’s paying cash-on-delivery when ordering something online or over the phone to your door — in which case your purchase is logged and recorded anyway, just in another type of system.

This isn’t only used against the ordinary citizen for marketing purposes, of course. It’s used against the ordinary citizen for every conceivable purpose. But we’ll be returning to that in a later article in the series.

Privacy remains your own responsibility.


Nation-State Hacking: 2017 in Review

By Eva Galperin

If 2016 was the year government hacking went mainstream, 2017 is the year government hacking played the Super Bowl halftime show. It’s not just Fancy Bear and Cozy Bear making headlines anymore. This week, the Trump administration publicly attributed the WannaCry ransomware attack to the Lazarus Group, which allegedly works on behalf of the North Korean government. As a Presidential candidate, Donald Trump famously dismissed allegations that the Russian government broke into email accounts belonging to John Podesta and the Democratic National Committee, saying it could easily have been the work of a “400 lb hacker” or China. The public calling-out of North Korean hacking appears to signal a very different attitude towards attribution.

Lazarus Group may be hot right now, but Russian hacking has continued to make headlines. Shortly after the release of WannaCry, there came another wave of ransomware infections, Petya/NotPetya (or, this author’s favorite name for the ransomware, “NyetYa”). Petya was hidden inside of a legitimate update to accounting software made by MeDoc, a Ukrainian company. For this reason and others, Petya was widely attributed to Russian actors and is thought to have primarily targeted companies in Ukraine, where MeDoc is commonly used. The use of ransomware as a wiper, a tool whose purpose is to render the computer unusable rather than to extort money from its owner, appears to be one of this year’s big new innovations in the nation-state actors’ playbook.

WannaCry and Petya both owe their effectiveness to a Microsoft Windows security vulnerability, code-named EternalBlue, that had been found by the NSA and was stolen and released by a group calling themselves the Shadow Brokers. US agencies losing control of their hacking tools has been a recurring theme in 2017. First, companies, hospitals, and government agencies found themselves targeted by re-purposed NSA exploits that we all rushed to patch; then WikiLeaks published Vault 7, a collection of CIA hacking tools that had been leaked to it, following it up with the publication of source code for tools in Vault 8.

This year also saw developments from perennial bad actor Ethiopia. In December, Citizen Lab published a report documenting the Ethiopian government’s ongoing efforts to spy on journalists and dissidents, this time with the help of software provided by Cyberbit, an Israeli company. The report also tracked Cyberbit’s salespeople as they demonstrated their surveillance product to governments including France, Vietnam, Kazakhstan, Rwanda, Serbia, and Nigeria. Other perennial bad actors also made a splash this year, including Vietnam, whose government was linked to Ocean Lotus, or APT 32, in a report from FireEye. The earliest known samples from this actor were found by EFF in 2014, when they were used to target our activists and researchers.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


The Year the Open Internet Came Under Siege: 2017 Year in Review

By Ernesto Falcon

The fight between the Federal Communications Commission, which chose to abandon the principles of net neutrality, and the majority of Americans, who support them, started early in 2017 and continued into the very last month of the year. But even with the FCC’s bad vote coming so late, we fought all year to build up momentum that will allow us to fix the agency’s blunder in 2018.

2017 started out with a warning: in his final address as chairman of the FCC, Tom Wheeler said that the future of a free and open Internet safeguarded by net neutrality was hanging by a thread. “All the press reports seem to indicate that the new commission will choose an ideologically based course,” said Wheeler. Wheeler also offered up the argument that “Network investment is up, investment in innovative services is up, and ISPs’ revenues—and stock prices—are at record levels. So, where’s the fire? Other than the desires of a few [providers] to be free of meaningful oversight, why the sudden rush to undo something that is demonstrably working?”

That would be a constant question posed throughout 2017: why would the FCC, under its new chairman, former Verizon lawyer Ajit Pai, move to eliminate something as functional and popular as net neutrality? After all, net neutrality protections guarantee that all information transmitted over the Internet be treated equally, preventing Internet service providers from prioritizing, say, their own content over that of competitors. It’s a logical set of rules that preserves the Internet as we know it. Net neutrality has been protected by the FCC for over a decade, culminating in the 2015 Open Internet Order, which we worked hard to get adopted in the first place.

As early as February, there were signs that the FCC was going to abandon its role guarding against data discrimination by ISPs. Early in the month, the FCC indicated it would cease investigating AT&T’s zero-rating practices. “Zero-rating” is when a company doesn’t count certain content against a user’s data limit. While zero-rating may sound good in theory, in reality it’s just your provider picking winners and losers and trying to influence how you use your data. AT&T was zero-rating content from DirecTV, which it owns. And, prior to Pai’s chairmanship, the FCC wanted to know if AT&T was treating all video service the same, in accordance with the principles of net neutrality. As Chairman, Pai abandoned the investigation.

The argument consistently put forward by opponents of net neutrality is that it imposes onerous rules on ISPs that stifle innovation and competition in the marketplace. The innovation claim is undermined by the many start-ups that lined up to defend net neutrality, telling the FCC that creativity depends on clear, workable rules. The competition claim is just as laughable, given that it is the large broadband companies that wanted net neutrality gutted—the same companies that are often the only option customers have. Net neutrality protections that forced monopolist ISPs to treat all data the same were some of the only competitive safeguards we had. Without them, nothing tempers alleged practices like Time Warner’s misleading of customers and Internet content providers, which the Open Internet Order had kept in check.

On April 26, the fear and rumor became reality as the FCC chairman announced his intention to roll back the Open Internet Order and “reclassify” broadband Internet access so that ISPs would be allowed to block content and selectively throttle speeds, which was previously prohibited. We knew this was unpopular and would have a devastating effect on speech and the Internet, so we gave you a tool to tell that to the FCC. We knew that the vast majority of you support net neutrality, and we worked hard to make sure your voices were heard.

The new plan proposed by Pai claimed to make ISPs answerable to the Federal Trade Commission (FTC) instead of the FCC – even though a pending court case might keep the FTC from having any oversight of major telecommunications companies altogether. Even if it retains some authority, the FTC can only get involved when ISPs break the promises they chose to make—a flimsy constraint that telecom lawyers can easily write around. Sure enough, just as the FCC carried out Pai’s repeal, we saw Comcast roll back its promises on net neutrality. And that was just the start of the problems we have with Pai’s proposal. An attack on the open Internet is an attack on free speech, and that’s worth defending.

In June, we and a coalition of hundreds of other groups that included nonprofits, artists, tech companies large and small, libraries, and even some ISPs called for a day of action in support of net neutrality. That day came on July 12, when EFF and other websites “blocked” access to their websites unless visitors “upgraded” to “premium” Internet service, a parody of the real consequences that would follow the repeal of net neutrality. Our day of action resulted in 1.6 million comments sent to the FCC.

We kept busy in July, submitting our own comment to the FCC in strong opposition to the proposed repeal. Removing net neutrality protections would, we explained, open the door to blocking websites, selectively throttling Internet speeds for some content, and charging fees to access favored content over “fast lanes.” Our comment joined that of nearly 200 computer scientists, Internet engineers, and other technical luminaries who pointed out that the FCC’s plan was premised on a number of misconceptions about how the Internet actually works and how it is used.

Even with the comments from the engineers, the final version of the plan released by the FCC still contained incorrect information about how the Internet works. It became clear the FCC was forging ahead with a repeal, without stating a valid reason for doing so or listening to the voices of the public that were pouring in. With that in mind, we created a tool that makes it easy to tell Congress to protect the web and created a guide for other ways to get involved.

On December 14, the FCC voted 3-2 to roll back net neutrality and abdicate its responsibility to ensure a free and open Internet. That vote is not the end of the story, not by far. The new rule is being met with legal challenges from all sides, from public interest groups to state attorneys general to technology companies. Meanwhile, state governments have started introducing laws to protect net neutrality on a local level. Even as lawsuits begin, Congress can stop the FCC nightmare from going forward. Under the Congressional Review Act (CRA), Congress has a window of time to reverse an agency rule. This means that we, and you, must continue to monitor and pressure Congress to do so. So call Congress and urge them to use their power under the CRA to save the Open Internet Order.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


DRM’s Dead Canary: How We Just Lost the Web, What We Learned from It, and What We Need to Do Next

By Cory Doctorow

EFF has been fighting against DRM and the laws behind it for a decade and a half, intervening in the US Broadcast Flag, the UN Broadcasting Treaty, the European DVB CPCM standard, the W3C EME standard and many other skirmishes, battles and even wars over the years. With that long history behind us, there are two things we want you to know about DRM:

  1. Everybody on the inside secretly knows that DRM technology is irrelevant, but DRM law is everything; and
  2. The reason companies want DRM has nothing to do with copyright.

These two points have just been demonstrated in a messy, drawn-out fight over the standardization of DRM in browsers, and since we threw a lot of blood and treasure at that fight, one thing we hope to salvage is an object lesson that will drive these two points home and provide a roadmap for the future of DRM fighting.

DRM IS TECHNOLOGICALLY BANKRUPT; DRM LAW IS DEADLY

Here’s how DRM works, at a high level: a company wants to provide a customer (you) with a digital asset (like a movie, a book, a song, a video game or an app), but it wants to control what you do with that file after you get it.

So they encrypt the file. We love encryption. Encryption works. With relatively little effort, anyone can scramble a file so well that no one will ever be able to decrypt it unless they’re provided with the key.

Let’s say this is Netflix. They send you a movie that’s been scrambled and they want to be sure you can’t save it and watch it later from your hard-drive. But they also need to give you a way to view the movie, too. At some point, that means unscrambling the movie. And there’s only one way to unscramble a file that’s been competently encrypted: you have to use the key.

So Netflix also gives you the unscrambling key.

But if you have the key, you can just unscramble the Netflix movies and save them to your hard drive. How can Netflix give you the key but control how you use it?

Netflix has to hide the key, somewhere on your computer, like in a browser extension or an app. This is where the technological bankruptcy comes in. Hiding something well is hard. Hiding something well in a piece of equipment that you give to your adversary to take away with them and do anything they want with is impossible.

Maybe you can’t find the keys that Netflix hid in your browser. But someone can: a bored grad student with a free weekend, a self-taught genius decapping a chip in their basement, a competitor with a full-service lab. One tiny flaw in any part of the fragile wrapping around these keys, and they’re free.

And once that flaw is exposed, anyone can write an app or a browser plugin that does have a save button. It’s game over for the DRM technology. (The keys escape pretty regularly, just as fast as they can be revoked by the DRM companies.)
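
The whole bind fits in a few lines. Below is a toy sketch using Node’s built-in crypto module (the “movie” and file name are invented): the encryption itself is solid, but since the player must hold the key to show you anything, whoever extracts that key gets the save button back.

```typescript
// A toy sketch of the DRM bind, using Node's built-in crypto module.
// The "movie" and file name are invented. The encryption is real and
// solid; the problem is everything after it.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";
import { writeFileSync } from "fs";

const key = randomBytes(32); // the secret every DRM scheme must somehow hide
const iv = randomBytes(16);

// Studio side: without the key, this ciphertext is effectively unrecoverable.
const cipher = createCipheriv("aes-256-ctr", key, iv);
const ciphertext = Buffer.concat([cipher.update("movie bytes..."), cipher.final()]);

// Player side: to show you the movie, the player must decrypt it, so the
// key has to be present on your machine.
function play(ct: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-ctr", key, iv);
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

// The "approved" player renders the frames...
console.log(play(ciphertext).toString());

// ...and a player built by anyone who extracted the same key can add the
// save button the DRM was supposed to prevent.
writeFileSync("saved-movie.bin", play(ciphertext));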

DRM gets made over the course of years, by skilled engineers, at a cost of millions of dollars. It gets broken in days, by teenagers, with hobbyist equipment. That’s not because the DRM-makers are stupid, it’s because they’re doing something stupid.

Which is where the law comes in. DRM law gives rightsholders more forceful, far-ranging legal powers than laws governing any other kind of technology. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), whose Section 1201 provides for felony liability for anyone commercially engaged in bypassing a DRM system: 5 years in prison and a $500,000 fine for a first offense. Even noncommercial bypass of DRM is subject to liability. It also makes it legally risky to even talk about how to bypass a DRM system.

So the law shores up DRM systems with a broad range of threats. If Netflix designs a video player that won’t save a video unless you break some DRM, it now has the right to sue any rival that rolls out an improved alternative streaming client or a video-recorder that works with Netflix — or to sic the police on them. Such tools wouldn’t violate copyright law any more than a VCR or a Tivo does, but because that recorder would have to break Netflix DRM, Netflix could use DRM law to crush it.

DRM law goes beyond mere bans on tampering with DRM. Companies also use Section 1201 of the DMCA to threaten security researchers who discover flaws in their products. The law becomes a weapon they can aim at anyone who wants to warn their customers (still you) that the products you’re relying on aren’t fit for use. That includes warning people about flaws in DRM that expose them to being hacked.

It’s not just the USA and not just the DMCA, either. The US Trade Representative has “convinced” countries around the world to adopt a version of this rule.

DRM HAS NOTHING TO DO WITH COPYRIGHT

DRM law has the power to do untold harm. Because it affords corporations the power to control the use of their products after sale, the power to decide who can compete with them and under what circumstances, and even who gets to warn people about defective products, DRM laws represent a powerful temptation.

Some things that aren’t copyright infringement: buying a DVD while you’re on holiday and playing it when you get home. It is obviously not a copyright infringement to go into a store in (say) New Delhi and buy a DVD and bring it home to (say) Topeka. The rightsholder made their movie, sold it to the retailer, and you paid the retailer the asking price. This is the opposite of copyright infringement. That’s paying for works on the terms set by the rightsholder. But because DRM stops you from playing out-of-region discs on your home player, the studios can invoke copyright law to decide where you can consume the copyrighted works you’ve bought, fair and square.

Other not-infringements: fixing your car (GM uses DRM to control who can diagnose an engine, and to force mechanics to spend tens of thousands of dollars for diagnostic information they could otherwise determine themselves or obtain from third parties); refilling an ink cartridge (HP pushed out a fake security update that added DRM to millions of inkjet printers so that they’d refuse remanufactured or third-party cartridges), or toasting home-made bread (though this hasn’t happened yet, there’s no reason that a company couldn’t put DRM in its toasters to control whose bread you can use).

It’s also not a copyright infringement to watch Netflix in a browser that Netflix hasn’t approved. It’s not a copyright infringement to record a Netflix movie to watch later. It’s not a copyright infringement to feed a Netflix video to an algorithm that can warn you about upcoming strobe effects that can trigger life-threatening seizures in people with photosensitive epilepsy.

WHICH BRINGS US TO THE W3C

The W3C is the world’s foremost open web standards body, a consortium whose members (companies, universities, government agencies, civil society groups and others) engage in protracted wrangles over the best way for everyone to deliver web content. They produce “recommendations” (W3C-speak for “standards”) that form the invisible struts that hold up the web. These agreements, produced through patient negotiation and compromise, represent an agreement by major stakeholders about the best (or least-worst) way to solve thorny technological problems.

In 2013, Netflix and a few other media companies convinced the W3C to start work on a DRM system for the web. This DRM system, Encrypted Media Extensions (EME), represented a sharp departure from the W3C’s normal business. First, EME would not be a complete standard: the organization would specify an API through which publishers and browser vendors would make DRM work, but the actual “content decryption module” (CDM) wouldn’t be defined by the standard. That means that EME was a standard in name only: if you started a browser company and followed all the W3C’s recommendations, you still wouldn’t be able to play back a Netflix video. For that, you’d need Netflix’s permission.
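
From a web page’s point of view, the whole of what EME standardizes looks roughly like the sketch below (TypeScript; the license-server URL is invented, while “com.widevine.alpha” is Google’s real key-system identifier). Every consequential step, from generating the license request to decrypting the video, happens inside the opaque, unstandardized CDM; the page just ferries blobs back and forth.

```typescript
// What EME actually standardizes: a thin negotiation API. The interesting
// work happens inside the proprietary CDM named by the key-system string.
async function setUpDrmPlayback(video: HTMLVideoElement, initData: ArrayBuffer) {
  // Step 1: ask the browser whether an approved CDM is present. A browser
  // built purely from the W3C spec has nothing to answer with.
  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [
    { videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }] },
  ]);

  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // Step 2: the CDM produces an opaque license request; the page merely
  // shuttles opaque blobs between the CDM and the publisher's server.
  const session = mediaKeys.createSession();
  session.addEventListener("message", async (event) => {
    const res = await fetch("https://license.example.com/", {
      method: "POST",
      body: event.message,
    });
    await session.update(await res.arrayBuffer());
  });
  await session.generateRequest("cenc", initData);
}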

It’s hard to overstate how weird this is. Web standards are about “permissionless interoperability.” The standards for formatting text mean that anyone can make a tool that can show you pages from the New York Times‘ website; images from Getty; or interactive charts on Bloomberg. The companies can still decide who can see which pages on their websites (by deciding who gets a password and which parts of the website each password unlocks), but they don’t get to decide who can make the web browsing program you type the password into in order to access the website.

A web in which every publisher gets to pick and choose which browsers you can use to visit their sites is a very different one from the historical web. Historically, anyone could make a new browser by making sure it adhered to W3C recommendations, and then start to compete. And while the web has always been dominated by a few browsers, which browsers dominate has changed every decade or so, as new companies and even nonprofits like Mozilla (who make Firefox) overthrew the old order. Technologies that have stood in the way of this permissionless interoperability — for instance, patent-encumbered video — have been seen as impediments to the idea of the open web, not standardization opportunities.

When the W3C starts making technologies that only work when they’re blessed by a handful of entertainment companies, they’re putting their thumbs — their fists — on the scales in favor of ensuring that the current browser giants get to enjoy a permanent reign.

But that’s the least of it. Until EME, W3C standards were designed to give the users of the web (e.g. you) more control over what your computer did while you were accessing other peoples’ websites. With EME — and for the first time ever — the W3C is designing technology that takes away your control. EME is designed to allow Netflix — and other big companies — to decide what your browser does, even (especially) when you disagree about what that should be.

Since the earliest days of computing, there’s been a simmering debate about whether computers exist to control their users, or vice versa. (As the visionary computer scientist and education specialist Seymour Papert put it, “children should be programming the computer rather than being programmed by it” — that applies equally well to adults.) Every W3C standard until 2017 was on the side of people controlling computers. EME breaks with that. It is a subtle, but profound shift.

WHY WOULD THE W3C DO THIS?

Ay yi yi. That is the three billion user question.

The W3C version of the story goes something like this. The rise of apps has weakened the web. In the pre-app days, the web was the only game in town, so companies had to play by web rules: open standards, open web. But now that apps exist and nearly everyone uses them, big companies can boycott the web, forcing their users into apps instead. That just accelerates the rise of apps, and weakens the web even more. Apps are used to implement DRM, so DRM-using companies are moving to apps. To keep entertainment companies from killing the web outright, the Web must have DRM too.

Even if those companies don’t abandon the web altogether, continues this argument, getting them to make their DRM at the W3C is better than letting them make it on an ad-hoc basis. Left to their own devices, they could make DRM that made no accommodations for people with disabilities, and without the W3C’s moderating influence, these companies would make DRM that would be hugely invasive of web users’ privacy.

The argument ends with a broad justification for DRM: companies have the right to protect their copyrights. We can’t expect an organization to spend fortunes creating or licensing movies and then distribute them in a way that lets anyone copy and share them.

We think that these arguments don’t hold much water. The web does indeed lack some of its earlier only-game-in-town muscle, but the reality is that companies make money by going where their customers are, and every potential customer has a browser, while only existing customers have a company’s apps. The more hoops a person has to jump through in order to become your customer, the fewer customers you’ll have. Netflix is in a hyper-competitive market with tons of new entrants (e.g. Disney), and being “that streaming service you can’t use on the web” is a serious deficit.

We also think that the media companies and tech companies would struggle to arrive at a standard for DRM outside of the W3C, even a really terrible one. We’ve spent a lot of time in the smoke-filled rooms of DRM standardization and the core dynamic there is the media companies demanding full-on lockdown for every frame of video, and tech companies insisting that the best anyone can hope for is an ineffectual “speed-bump” that they hope will mollify the media companies. Often as not, these negotiations collapse under their own weight.

Then there’s the matter of patents: companies that think DRM is a good idea also love software patents, and the result is an impenetrable thicket of patents that make getting anything done next to impossible. The W3C’s patent-pooling mechanism (which is uniquely comprehensive in the standards world and stands as an example of the best way to do this sort of thing) was essential to making DRM standardization possible. What’s more, there are key players in the DRM world, like Adobe, who hold significant patent portfolios but are playing an ever-dwindling role in the world of DRM (the avowed goal of EME was to “kill Flash”). If the companies involved had to all sit down and negotiate a new patent deal without the W3C’s framework, any of these companies could “turn troll” and insist that all the rest would have to shell out big dollars to license their patents — they have nothing to lose by threatening the entire enterprise, and everything to gain from even a minuscule per-user royalty for something that will be rolled out into three billion browsers.

Finally, there’s no indication that EME had anything to do with protecting legitimate business interests. Streaming video services like Netflix rely on customers to subscribe to a whole library with constantly added new materials and a recommendation engine to help them navigate the catalog.

DRM for streaming video is all about preventing competition, not protecting copyrights. The purpose of DRM is to give companies the legal tools to prevent activities that would otherwise be allowed. The DRM part doesn’t have to “work” (in the sense of preventing copyright infringement) so long as it allows for the invocation of the DMCA.

To see how true this is, just look at Widevine, Google’s version of EME. Google bought the company that made Widevine in 2010, but it wasn’t until 2016 that an independent researcher actually took a close look at how well it prevented videos from leaking. That researcher, David Livshits, found that Widevine was trivial to circumvent, that it had been since its inception, and that the errors that made Widevine so ineffective were obvious to even a cursory examination. If the millions of dollars and the high-power personnel committed to EME were allocated to create a technology that would effectively prevent copyright infringement, then you’d think that Netflix or one of the other media companies in the negotiations would have diverted some of those resources to a quick audit to make sure that the stuff actually worked as advertised.

(Funny story: Livshits is an Israeli at Ben Gurion University, and Israel happens to be the rare country that doesn’t ban breaking DRM, meaning that Israelis are among the only people who can do this kind of research without fear of legal retaliation.)

But the biggest proof that EME was just a means to shut down legitimate competitors — and not an effort to protect copyright — is what happened next.

A CONTROLLED EXPERIMENT

When EFF joined the W3C, our opening bid was “Don’t make DRM.”

We put the case to the organization, describing the way that DRM interferes with the important copyright exceptions (like those that allow people to record and remix copyrighted works for critical or transformative purposes) and the myriad problems presented by the DMCA and laws like it around the world.

The executive team of the W3C basically dismissed all arguments about fair use and user rights in copyright as a kind of unfortunate casualty of the need to keep Netflix from ditching the web in favor of apps. As for the DMCA, they said that they couldn’t do anything about this crazy law, but they were sure that the W3C’s members were not interested in abusing the DMCA; the members just wanted to keep their high-value movies from being shared on the internet.

So we changed tack, and proposed a kind of “controlled experiment” to find out what the DRM fans at the W3C were trying to accomplish.

The W3C is a consensus body: it makes standards by getting everyone in a room to compromise, moving toward a position that everyone can live with. Our ideal world was “No DRM at the W3C,” and DRM is a bad enough idea that it was hard to imagine much of a compromise from there.

But after listening closely to the DRM side’s disavowals of DMCA abuse, we thought we could find something that would represent an improvement on the current status quo and that should fit with their stated views.

We proposed a kind of DRM non-aggression pact, through which W3C members would promise that they’d only sue people under laws like DMCA 1201 if there was some other law that had been broken. So if someone violates your copyright, or incites someone to violate your copyright, or interferes with your contracts with your users, or misappropriates your trade secrets, or counterfeits your trademarks, or does anything else that violates your legal rights, you can throw the book at them.

But if someone goes around your DRM and doesn’t violate any other laws, the non-aggression pact means that you couldn’t use the W3C-standardized DRM as a route to legally shut them down. That would protect security researchers, it would protect people analyzing video to add subtitles and other assistive features, it would protect archivists who had the legal right to make copies, and it would protect people making new browsers.

If all you care about is making an effective technology that prevents lawbreaking, this agreement should be a no-brainer. For starters, if you think DRM is an effective technology, it shouldn’t matter if it’s illegal to criticize it.

And since the nonaggression pact kept all other legal rights intact, there was no risk that agreeing to it would allow someone to break the law with impunity. Anyone who violated copyrights (or any other rights) would be square in the DMCA’s crosshairs, and companies would have their finger on the trigger.

NOT SURPRISED BUT STILL DISAPPOINTED

Of course, they hated this idea.

The studios, the DRM vendors and the large corporate members of the W3C participated in a desultory, brief “negotiation” before voting to terminate further discussion and press on. The W3C executive helped them dodge discussions, chartering further work on EME without any parallel work on protecting the open web, even as opposition within the W3C mounted.

By the time the dust settled, EME was published after the most divided votes the W3C had ever seen, with the W3C executive unilaterally declaring that issues for security research, accessibility, archiving and innovation had been dealt with as much as they could be (despite the fact that literally nothing binding was done about any of these things). The “consensus” process of the W3C had been so thoroughly hijacked that EME’s publication was supported by only 58% of the members who voted in the final poll, and many of those members expressed regret that they were cornered into voting for something they objected to.

When the W3C executive declared that any protections for the open web were incompatible with the desires of the DRM-boosters, it was a kind of ironic vindication. After all, this is where we’d started, with EFF insisting that DRM wasn’t compatible with security disclosures, with accessibility, with archiving or innovation. Now, it seemed, everyone agreed.

What’s more, they all implicitly agreed that DRM wasn’t about protecting copyright. It was about using copyright to seize other rights, like the right to decide who could criticize your product — or compete with it.

DRM’s sham cryptography means that it only works if you’re not allowed to know about its defects. This proposition was conclusively proved when a W3C member proposed that the Consortium should protect disclosures that affected EME’s “privacy sandbox” and opened users to invasive spying, and within minutes, Netflix’s representative said that even this was not worth considering.

In a twisted way, Netflix was right. DRM is so fragile, so incoherent, that it is simply incompatible with the norms of the marketplace and science, in which anyone is free to describe their truthful discoveries, even if they frustrate a giant company’s commercial aspirations.

The W3C tacitly admitted this when they tried to convene a discussion group to come up with some nonbinding guidelines for when EME-using companies should use the power of DRM law to punish their critics and when they should permit the criticism.

“RESPONSIBLE DISCLOSURE” ON OUR TERMS, OR JAIL

They called this “responsible disclosure,” but it was far from the kinds of “responsible disclosure” we see today. In current practice, companies offer security researchers enticements to disclose their discoveries to vendors before going public. These enticements range from bug-bounty programs that pay out cash, to leaderboards that provide glory to the best researchers, to binding promises to act on disclosures in a timely way, rather than crossing their fingers, sitting on the newly discovered defects, and hoping no one else re-discovers them and exploits them.

The tension between independent security researchers and corporations is as old as computing itself. Computers are hard to secure, thanks to their complexity. Perfection is elusive. Keeping the users of networked computers safe requires constant evaluation and disclosure, so that vendors can fix their bugs and users can make informed decisions about which systems are safe enough to use.

But companies aren’t always the best stewards of bad news about their own products. As researchers have discovered — the hard way — telling a company about its mistakes may be the polite thing to do, but it’s very risky behavior, apt to get you threatened with legal reprisals if you go public. Many’s the researcher who told a company about a bug, only to have the company sit on that news for an intolerably long time, putting its users at risk. Often, these bugs only come to light when they are independently discovered by bad actors, who figure out how to exploit them, turning them into attacks that compromise millions of users, so many that the bug’s existence can no longer be swept under the rug.

As the research world grew more gunshy about talking to companies, companies were forced to make real, binding assurances that they would honor the researchers’ discoveries by taking swift action in a defined period, by promising not to threaten researchers over presenting their findings, and even by bidding for researchers’ trust with cash bounties. Over the years, the situation has improved, with most big companies offering some kind of disclosure program.

But the reason companies offer those bounties and assurances is that they have no choice. Telling the truth about defective products is not illegal, so researchers who discover those truths are under no obligation to play by companies’ rules. That forces companies to demonstrate their goodwill with good conduct, binding promises and pot-sweeteners.

Companies definitely want to be able to decide who can tell the truth about their products and when. We know that because when they get the chance to flex that muscle, they flex it. We know it because they said so at the W3C. We know it because they demanded that they get that right as part of the DRM package in EME.

Of all the lows in the W3C DRM process, the most shocking was when the historic defenders of the open web tried to turn an effort to protect the rights of researchers to warn billions of people about harmful defects in their browsers into an effort to advise companies on when they should hold off on exercising that right — a right they wouldn’t have without the W3C making DRM for the web.

DRM IS THE OPPOSITE OF SECURITY

From the first days of the DRM fight at the W3C, we understood that the DRM vendors and the media companies they supplied weren’t there to protect copyright, they were there to grab legally enforceable non-copyright privileges. We also knew that DRM was incompatible with security research: because DRM relies on obfuscation, anyone who documents how DRM works also makes it stop working.

This is especially clear in terms of what wasn’t said at the W3C: when we proposed that people should be able to break DRM to generate subtitles or conduct security audits, the arguments were always about whether that was acceptable, but it was never about whether it was possible.

Recall that EME is supposed to be a system that helps companies ensure that their movies aren’t saved to their users’ hard-drives and shared around the internet. For this to work, it should be, you know, hard to do that.

But in every discussion of when people should be allowed to break EME, it was always a given that anyone who wanted to could do so. After all, when you hide secrets in software you give to people who you want to keep them secret from, you are probably going to be disappointed.

From day one, we understood that we would arrive at a point in which the DRM advocates at the W3C would be obliged to admit that the survival of their plan relied on being able to silence people who examined their products.

However, we did hold out hope that when this became clear to everyone, they would understand that DRM couldn’t peacefully co-exist with the open web.

We were wrong.

THE W3C IS THE CANARY IN THE COALMINE

The success of DRM at the W3C is a parable about market concentration and the precarity of the open web. Hundreds of security researchers lobbied the W3C to protect their work, UNESCO publicly condemned the extension of DRM to the web, and the many crypto-currency members of the W3C warned that using browsers for secure, high-stakes applications like moving around people’s life savings could only happen if browsers were subjected to the same security investigations as every other technology in our lives (except DRM technologies).

There is no shortage of businesses that want to be able to control what their customers and competitors do with their products. When the US Copyright Office held hearings on DRM in 2015, they heard about DRM in medical implants and cars, farm equipment and voting machines. Companies have discovered that adding DRM to their products is the most robust way to control the marketplace, a cheap and reliable way to convert commercial preferences about who can repair, improve, and supply their products into legally enforceable rights.

The marketplace harms from this anti-competitive behavior are easy to see. For example, the aggressive use of DRM to lock out independent repair shops ends up diverting tons of e-waste to landfill or recycling, at the cost of local economies and people’s ability to get full use out of their property. A phone that you recycle instead of repairing is a phone you have to pay to replace — and repair creates many more jobs than recycling (recycling a ton of e-waste creates 15 jobs; repairing it creates 150 jobs). Repair jobs are local, entrepreneurial jobs, because you don’t need a lot of capital to start a repair shop, and your customers want to bring their gadgets to someone local for service (no one wants to send a phone to China for repairs — let alone a car!).

But those economic harms are only the tip of the iceberg. Laws like DMCA 1201 incentivize DRM by promising the power to control competition, but DRM’s worst harms are in the realm of security. When the W3C published EME, it bequeathed to the web an unauditable attack-surface in browsers used by billions of people for their most sensitive and risky applications. These browsers are also the control panels for the Internet of Things: the sensor-studded, actuating gadgets that can see us, hear us, and act on the physical world, with the power to boil, freeze, shock, concuss, or betray us in a thousand ways.

The gadgets themselves have DRM, intended to lock out repairs and third-party consumables, meaning that everything from your toaster to your car is becoming off-limits to scrutiny by independent researchers who can give you unvarnished, unbiased assessments of the security and reliability of these devices.

In a competitive market, you’d expect non-DRM options to proliferate in answer to this bad behavior. After all, no customer wants DRM: no car-dealer ever sold a new GM by boasting that it was a felony for your favorite mechanic to fix it.

But we don’t live in a competitive market. Laws like DMCA 1201 undermine the competition that might counter DRM’s worst effects.

The companies that fought DRM at the W3C — browser vendors, Netflix, tech giants, the cable industry — all trace their success to business strategies that shocked and outraged established industry when they first emerged. Cable started as unlicensed businesses that retransmitted broadcasts and charged for it. Apple’s dominance started with ripping CDs and ignoring the howls of the music industry (just as Firefox got where it is by blocking obnoxious ads and ignoring the web-publishers who lost millions as a result). Of course, Netflix’s revolutionary red envelopes were treated as a form of theft.

These businesses started as pirates and became admirals, and treat their origin stories as legends of plucky, disruptive entrepreneurs taking on a dinosauric and ossified establishment. But they treat any disruption aimed at them as an affront to the natural order of things. To paraphrase Douglas Adams, any technology invented in your adolescence is amazing and world-changing; anything invented after you turn 30 is immoral and needs to be destroyed.

LESSONS FROM THE W3C

Most people don’t understand the risks of DRM. The topic is weird, technical, and esoteric, and it takes too long to explain. The pro-DRM side wants to make the debate about piracy and counterfeiting, and those are easy stories to tell.

But people who want DRM don’t really care about that stuff, and we can prove it: just ask them if they’d be willing to promise not to use the DMCA unless someone is violating copyright, and watch them squirm and weasel about why policing copyright involves shutting down competitive activities that don’t violate copyright. Point out that they didn’t even question whether someone could break their DRM, because, of course, DRM is so technologically incoherent that it only works if it’s against the law to understand how it works, and it can be defeated just by looking closely at it.

Ask them to promise not to invoke the DMCA against people who have discovered defects in their products and listen to them defend the idea that companies should get a veto over publication of true facts about their mistakes and demerits.

These inconvenient framings at least establish what we’re fighting about, dispensing with the disingenuous arguments about copyright and moving on to the real issues: competition, accessibility, security.

This won’t win the fight on its own. These are still wonky and nuanced ideas.

One thing we’ve learned from 15-plus years fighting DRM: it’s easier to get people to take notice of procedural issues than substantive ones. We labored in vain to get people to care about the Broadcasting Treaty, a bafflingly complex and horribly overreaching treaty from WIPO, a UN specialized agency. No one cared until someone started stealing piles of our handouts and hiding them in the toilets so no one could read them. That was global news: it’s hard to figure out what something like the Broadcasting Treaty is about, but it’s easy to call shenanigans when someone tries to hide your literature in the toilet so delegates don’t see the opposing view.

So it was that four years of beating the drum about DRM at the W3C barely broke the surface, but when we resigned from the W3C over the final vote, everyone sat up and took notice, asking how they could help fix things. The short answer is, “It’s too late: we resigned because we had run out of options.”

But the long answer is a little more hopeful. EFF is suing the US government to overturn Section 1201 of the DMCA. As we proved at the W3C, there is no appetite for making DRM unless there’s a law like DMCA 1201 in the mix. DRM on its own does nothing except provide an opportunity for competitors to kick butt with innovative offerings that cost less and do more.

The Copyright Office is about to hold fresh hearings about DMCA 1201.

The W3C fight proved that we could shift the debate to the real issues. The incentives that led to the W3C being colonized by DRM are still in play, and other organizations will face this threat in the years to come. We’ll continue to refine this tactic in those hearings and beyond, and we’ll keep reporting on how it goes so that you can help us fight. All we ask is that you keep paying attention. As we learned at the W3C, we can’t do it without you.


Appeals Court’s Disturbing Ruling Jeopardizes Protections for Anonymous Speakers
By sophia

A federal appeals court has issued an alarming ruling that significantly erodes the Constitution’s protections for anonymous speakers—and simultaneously hands law enforcement nearly unlimited power to unmask them.

The Ninth Circuit’s decision in U.S. v. Glassdoor, Inc. is a significant setback for the First Amendment. The ability to speak anonymously online without fear of being identified is essential because it allows people to express controversial or unpopular views. Strong legal protections for anonymous speakers are needed so that they are not harassed, ridiculed, or silenced merely for expressing their opinions.

In Glassdoor, the court’s ruling ensures that any grand jury subpoena seeking the identities of anonymous speakers will be upheld virtually every time. The decision is a recipe for disaster precisely because it provides little to no legal protection for anonymous speakers.

EFF applauds Glassdoor for standing up for its users’ First Amendment rights in this case and for its commitment to do so moving forward. Yet we worry that without stronger legal standards—which EFF and other groups urged the Ninth Circuit to apply (read our brief filed in the case)—the government will easily compel platforms to comply with grand jury subpoenas to unmask anonymous speakers.

The Ninth Circuit Undercut Anonymous Speech by Applying the Wrong Test

The case centers on a federal grand jury in Arizona investigating allegations of fraud by a private contractor working for the Department of Veterans Affairs. The grand jury issued a subpoena to Glassdoor, which operates an online platform that allows current and former employees to comment anonymously about their employers. The subpoena sought the identities of the users behind eight accounts that had posted about the contractor.

Glassdoor challenged the subpoena by asserting its users’ First Amendment rights. When the trial court ordered Glassdoor to comply, the company appealed to the U.S. Court of Appeals for the Ninth Circuit.

The Ninth Circuit ruled that because the subpoena was issued by a grand jury as part of a criminal investigation, Glassdoor had to comply absent evidence that the investigation was being conducted in bad faith.

There are several problems with the court’s ruling, but the biggest is that in adopting a “bad faith” test as the sole limit on when anonymous speakers can be unmasked by a grand jury subpoena, it relied on a U.S. Supreme Court case called Branzburg v. Hayes.

In challenging the subpoena, Glassdoor rightly argued that Branzburg was not relevant because it dealt with whether journalists had a First Amendment right to protect the identities of their confidential sources in the face of grand jury subpoenas, and more generally, whether journalists have a First Amendment right to gather the news. This case, however, squarely deals with Glassdoor users’ First Amendment right to speak anonymously.

The Ninth Circuit ran roughshod over the issue, calling it “a distinction without a difference.” But here’s the problem: although the law is all over the map as to whether the First Amendment protects journalists’ ability to guard their sources’ identities, there is absolutely no question that the First Amendment grants anonymous speakers the right to protect their identities.

The Supreme Court has repeatedly ruled that the First Amendment protects anonymous speakers, often by emphasizing the historic importance of anonymity in our social and political discourse. For example, many of our founders spoke anonymously while debating the provisions of our Constitution.

Because the Supreme Court in Branzburg did not outright rule that reporters have a First Amendment right to protect their confidential sources, it adopted a rule that requires a reporter to respond to a grand jury subpoena for their source’s identity unless the reporter can show that the investigation is being conducted in bad faith. This is a very weak standard, and bad faith is difficult to prove.

By contrast, because the right to speak anonymously has been firmly established by the Supreme Court and in jurisdictions throughout the country, the tests for when parties can unmask those speakers are more robust and protective of their First Amendment rights. These tests more properly calibrate the competing interests between the government’s need to investigate crime and the First Amendment rights of anonymous speakers.

The Ninth Circuit’s reliance on Branzburg effectively eviscerates any substantive First Amendment protections for anonymous speakers by not imposing any meaningful limitation on grand jury subpoenas. Further, the court’s ruling puts the burden on anonymous speakers—or platforms like Glassdoor standing in their shoes—to show that an investigation is being conducted in bad faith before a court will set aside the subpoena.

The Ninth Circuit’s reliance on Branzburg is also wrong because the Supreme Court ruling in that case was narrow and limited to the situation involving reporters’ efforts to guard the identities of their confidential sources. As Justice Powell wrote in his concurrence, “I … emphasize what seems to me to be the limited nature of the Court’s ruling.” The standards in that unique case should not be transported to cases involving grand jury subpoenas to unmask anonymous speakers generally. However, that’s what the court has done—expanded Branzburg to now apply in all instances in which a grand jury subpoena targets individuals whose identities are unknown to the grand jury.

Finally, the Ninth Circuit’s use of Branzburg is improper for yet another reason: a number of other cases and legal doctrines more squarely address how courts should treat demands to pierce anonymity. Indeed, as we discussed in our brief, there is a whole body of law that applies robust standards to unmasking anonymous speakers, including the Ninth Circuit’s previous decision in Bursey v. U.S., which also involved a grand jury.

The Ninth Circuit Failed to Recognize the Associational Rights of Anonymous Online Speakers

The court’s decision is also troubling because it takes an extremely narrow view of the kind of anonymous associations that should be protected by the First Amendment. In dismissing claims by Glassdoor that the subpoena chilled its users’ First Amendment rights to privately associate with others, the court ruled that because Glassdoor is not itself a social or political organization such as the NAACP, the claim was “tenuous.”

There are several layers to the First Amendment right of association, including the ability of individuals to associate with others, the ability of individuals to associate with a particular organization or group, and the ability for a group or organization to maintain the anonymity of members or supporters.

Although it’s true that Glassdoor users are not joining an organization like the NAACP or a union, the court’s analysis ignores that other associational rights are implicated by the subpoena in this case. At minimum, Glassdoor’s online platform offers the potential for individuals to organize and form communities around their shared employment experiences. The First Amendment must protect those interests even if Glassdoor lacks an explicit political goal.

Moreover, even if it’s true that Glassdoor users may not have an explicitly political goal in commenting on their current or past employers, they are still associating online with others with similar experiences to speak honestly about what happens inside companies, what their professional experiences are like, and how they believe those employers can improve.

The risk of being identified as a Glassdoor user is a legitimate one that courts should recognize as analogous to the risks of civil rights groups or unions being compelled to identify their members. Disclosure in both instances chills individuals’ abilities to explore their own experiences, attitudes, and beliefs.

The Ninth Circuit Missed an Opportunity to Vindicate Online Speakers’ First Amendment Rights

Significantly absent from the court’s decision was any real discussion about the value of anonymous speech and its historical role in our country. This is a shame because the case would have been a great opportunity to show the importance of First Amendment protections for online speakers.

EFF has long fought for anonymity online because we know its importance in fostering robust expression and debate. Subpoenas such as the one issued to Glassdoor deter people from speaking anonymously about issues related to their employment. Glassdoor provides a valuable service because its anonymous reviews help inform other people’s career choices while also keeping employers accountable to their workers and potentially the general public.

The Ninth Circuit’s decision appeared unconcerned with this reality, and its “bad faith” standard places no meaningful limit on the use of grand jury subpoenas to unmask anonymous speakers. This will ultimately harm speakers who can now be more easily targeted and unmasked, particularly if they have said something controversial or offensive. 
