DRM’s Dead Canary: How We Just Lost the Web, What We Learned from It, and What We Need to Do Next

By Cory Doctorow

EFF has been fighting against DRM and the laws behind it for a decade and a half, intervening in the US Broadcast Flag, the UN Broadcasting Treaty, the European DVB CPCM standard, the W3C EME standard and many other skirmishes, battles and even wars over the years. With that long history behind us, there are two things we want you to know about DRM:

  1. Everybody on the inside secretly knows that DRM technology is irrelevant, but DRM law is everything; and
  2. The reason companies want DRM has nothing to do with copyright.

These two points have just been demonstrated in a messy, drawn-out fight over the standardization of DRM in browsers, and since we threw a lot of blood and treasure at that fight, one thing we hope to salvage is an object lesson that will drive these two points home and provide a roadmap for the future of DRM fighting.

DRM IS TECHNOLOGICALLY BANKRUPT; DRM LAW IS DEADLY

Here’s how DRM works, at a high level: a company wants to provide a customer (you) with a digital asset (like a movie, a book, a song, a video game or an app), but it wants to control what you do with that file after you get it.

So they encrypt the file. We love encryption. Encryption works. With relatively little effort, anyone can scramble a file so well that no one will ever be able to decrypt it unless they’re provided with the key.

Let’s say this is Netflix. They send you a movie that’s been scrambled, and they want to be sure you can’t save it and watch it later from your hard drive. But they also need to give you a way to view the movie. At some point, that means unscrambling it. And there’s only one way to unscramble a file that’s been competently encrypted: you have to use the key.

So Netflix also gives you the unscrambling key.

But if you have the key, you can just unscramble the Netflix movies and save them to your hard drive. How can Netflix give you the key but control how you use it?

Netflix has to hide the key somewhere on your computer, like in a browser extension or an app. This is where the technological bankruptcy comes in. Hiding something well is hard. Hiding something well in a piece of equipment that you give to your adversary to take away and do anything they want with is impossible.
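
To see the bankruptcy concretely, here is a minimal sketch in TypeScript using the standard WebCrypto API (modern browsers, or Node 19+). The key, nonce and “movie” bytes are invented stand-ins, but the point is general: the same key that lets a player render the film lets any program holding it write the plaintext to disk.

```ts
// Hedged demo: AES-GCM scramble/unscramble with WebCrypto. In a real
// DRM scheme, the key below is what gets buried in the CDM on your
// machine; whoever extracts it can do exactly what this code does.

async function demo(): Promise<void> {
  const movie = new TextEncoder().encode("pretend these bytes are a film");

  // What the service does before sending you the file:
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 128 }, true, ["encrypt", "decrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12)); // per-file nonce
  const scrambled = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, movie);

  // What your player must do to show you the film, and therefore what
  // any program holding the key can do, including one with a save button:
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, scrambled);
  console.log(new TextDecoder().decode(plaintext)); // playable, or saveable, at will
}

demo().catch(console.error);
```

Nothing in the cryptography distinguishes “play” from “save”; that distinction has to live in the software surrounding the key, which is why the key has to be hidden.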

Maybe you can’t find the keys that Netflix hid in your browser. But someone can: a bored grad student with a free weekend, a self-taught genius decapping a chip in their basement, a competitor with a full-service lab. One tiny flaw in any part of the fragile wrapping around these keys, and they’re free.

And once that flaw is exposed, anyone can write an app or a browser plugin that does have a save button. It’s game over for the DRM technology. (The keys escape pretty regularly, just as fast as they can be revoked by the DRM companies.)

DRM gets made over the course of years, by skilled engineers, at a cost of millions of dollars. It gets broken in days, by teenagers, with hobbyist equipment. That’s not because the DRM-makers are stupid, it’s because they’re doing something stupid.

Which is where the law comes in. DRM law gives rightsholders more forceful, far-ranging legal powers than laws governing any other kind of technology. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), whose Section 1201 provides for felony liability for anyone commercially engaged in bypassing a DRM system: 5 years in prison and a $500,000 fine for a first offense. Even noncommercial bypass of DRM is subject to liability. It also makes it legally risky to even talk about how to bypass a DRM system.

So the law shores up DRM systems with a broad range of threats. If Netflix designs a video player that won’t save a video unless you break some DRM, it gains the right to sue — or sic the police on — any rival that rolls out an improved alternative streaming client, or a video recorder that works with Netflix. Such tools wouldn’t violate copyright law any more than a VCR or a TiVo does, but because they would have to break Netflix’s DRM, Netflix could use DRM law to crush them.

DRM law goes beyond mere bans on tampering with DRM. Companies also use Section 1201 of the DMCA to threaten security researchers who discover flaws in their products. The law becomes a weapon they can aim at anyone who wants to warn their customers (still you) that the products you’re relying on aren’t fit for use. That includes warning people about flaws in DRM that expose them to being hacked.

It’s not just the USA and not just the DMCA, either. The US Trade Representative has “convinced” countries around the world to adopt a version of this rule.

DRM HAS NOTHING TO DO WITH COPYRIGHT

DRM law has the power to do untold harm. Because it affords corporations the power to control the use of their products after sale, the power to decide who can compete with them and under what circumstances, and even who gets to warn people about defective products, DRM laws represent a powerful temptation.

Some things that aren’t copyright infringement: buying a DVD while you’re on holiday and playing it when you get home. It is obviously not a copyright infringement to go into a store in (say) New Delhi and buy a DVD and bring it home to (say) Topeka. The rightsholder made their movie, sold it to the retailer, and you paid the retailer the asking price. This is the opposite of copyright infringement. That’s paying for works on the terms set by the rightsholder. But because DRM stops you from playing out-of-region discs on your home player, the studios can invoke copyright law to decide where you can consume the copyrighted works you’ve bought, fair and square.

Other not-infringements: fixing your car (GM uses DRM to control who can diagnose an engine, and to force mechanics to spend tens of thousands of dollars for diagnostic information they could otherwise determine themselves or obtain from third parties); refilling an ink cartridge (HP pushed out a fake security update that added DRM to millions of inkjet printers so that they’d refuse remanufactured or third-party cartridges), or toasting home-made bread (though this hasn’t happened yet, there’s no reason that a company couldn’t put DRM in its toasters to control whose bread you can use).

It’s also not a copyright infringement to watch Netflix in a browser that Netflix hasn’t approved. It’s not a copyright infringement to record a Netflix movie to watch later. It’s not a copyright infringement to feed a Netflix video to an algorithm that can warn you about upcoming strobe effects that can trigger life-threatening seizures in people with photosensitive epilepsy.

WHICH BRINGS US TO THE W3C

The W3C is the world’s foremost open web standards body, a consortium whose members (companies, universities, government agencies, civil society groups and others) engage in protracted wrangles over the best way for everyone to deliver web content. They produce “recommendations” (W3C-speak for “standards”) that form the invisible struts that hold up the web. These agreements, produced through patient negotiation and compromise, represent an agreement by major stakeholders about the best (or least-worst) way to solve thorny technological problems.

In 2013, Netflix and a few other media companies convinced the W3C to start work on a DRM system for the web. This DRM system, Encrypted Media Extensions (EME), represented a sharp departure from the W3C’s normal business. For a start, EME would not be a complete standard: the organization would specify an API through which publishers and browser vendors would make DRM work, but the actual “content decryption module” (CDM) wouldn’t be defined by the standard. That made EME a standard in name only: if you started a browser company and followed all the W3C’s recommendations, you still wouldn’t be able to play back a Netflix video. For that, you’d need Netflix’s permission.
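
For the technically curious, here is roughly what the EME handshake looks like from a player’s point of view, as a TypeScript sketch. The calls are the W3C EME API surface itself; the key system, codec string and license-server URL are illustrative stand-ins, and the CDM that actually decrypts the video is the proprietary black box the specification never defines.

```ts
// A sketch of the EME flow; the license server URL is hypothetical.

const video = document.querySelector("video") as HTMLVideoElement;

async function setUpEme(): Promise<void> {
  // Step 1: ask the browser for a CDM. This fails on any browser that
  // no DRM vendor has blessed, however standards-compliant it is.
  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [{
    initDataTypes: ["cenc"],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // Step 2: shuttle opaque blobs between the CDM and the publisher's
  // license server. Neither the browser nor the user can inspect them.
  video.addEventListener("encrypted", (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const res = await fetch("https://license.example.com/get-license", {
        method: "POST",
        body: msg.message, // CDM-generated license request (opaque)
      });
      await session.update(await res.arrayBuffer()); // opaque license back in
    });
    void session.generateRequest(event.initDataType, event.initData!);
  });
}

void setUpEme();
```

Note what is missing: a browser can implement every line of this, and every other W3C recommendation, and still play nothing until a DRM vendor agrees to license it a CDM.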

It’s hard to overstate how weird this is. Web standards are about “permissionless interoperability.” The standards for formatting text mean that anyone can make a tool that can show you pages from the New York Times’ website, images from Getty, or interactive charts on Bloomberg. The companies can still decide who can see which pages on their websites (by deciding who gets a password and which parts of the website each password unlocks), but they don’t get to decide who can make the browser you type the password into.

A web in which every publisher gets to pick and choose which browsers you can use to visit their sites is a very different one from the historical web. Historically, anyone could make a new browser by making sure it adhered to W3C recommendations, and then start to compete. And while the web has always been dominated by a few browsers, which browsers dominate has changed every decade or so, as new companies and even nonprofits like Mozilla (who make Firefox) overthrew the old order. Technologies that have stood in the way of this permissionless interoperability — for instance, patent-encumbered video — have been seen as impediments to the idea of the open web, not standardization opportunities.

When the W3C starts making technologies that only work when they’re blessed by a handful of entertainment companies, they’re putting their thumbs — their fists — on the scales in favor of ensuring that the current browser giants get to enjoy a permanent reign.

But that’s the least of it. Until EME, W3C standards were designed to give the users of the web (e.g. you) more control over what your computer did while you were accessing other peoples’ websites. With EME — and for the first time ever — the W3C is designing technology that takes away your control. EME is designed to allow Netflix — and other big companies — to decide what your browser does, even (especially) when you disagree about what that should be.

Since the earliest days of computing, there’s been a simmering debate about whether computers exist to control their users, or vice versa. (As the visionary computer scientist and education specialist Seymour Papert put it, “children should be programming the computer rather than being programmed by it” — and that applies equally well to adults.) Every W3C standard until 2017 was on the side of people controlling computers. EME breaks with that. It is a subtle but profound shift.

WHY WOULD THE W3C DO THIS?

Ay yi yi. That is the three-billion-user question.

The W3C version of the story goes something like this. The rise of apps has weakened the web. In the pre-app days, the web was the only game in town, so companies had to play by web rules: open standards, open web. But now that apps exist and nearly everyone uses them, big companies can boycott the web, forcing their users into apps instead. That just accelerates the rise of apps, and weakens the web even more. Apps are used to implement DRM, so DRM-using companies are moving to apps. To keep entertainment companies from killing the web outright, the web must have DRM too.

Even if those companies don’t abandon the web altogether, continues this argument, getting them to make their DRM at the W3C is better than letting them make it on an ad-hoc basis. Left to their own devices, they could make DRM that made no accommodations for people with disabilities, and without the W3C’s moderating influence, these companies would make DRM that would be hugely invasive of web users’ privacy.

The argument ends with a broad justification for DRM: companies have the right to protect their copyrights. We can’t expect an organization to spend fortunes creating or licensing movies and then distribute them in a way that lets anyone copy and share them.

We think that these arguments don’t hold much water. The web does indeed lack some of its earlier only-game-in-town muscle, but the reality is that companies make money by going where their customers are, and every potential customer has a browser, while only existing customers have a company’s apps. The more hoops a person has to jump through in order to become your customer, the fewer customers you’ll have. Netflix is in a hyper-competitive market with tons of new entrants (e.g. Disney), and being “that streaming service you can’t use on the web” is a serious deficit.

We also think that the media companies and the tech companies would struggle to arrive at a standard for DRM outside of the W3C, even a really terrible one. We’ve spent a lot of time in the smoke-filled rooms of DRM standardization, and the core dynamic there is the media companies demanding full-on lockdown for every frame of video, and the tech companies insisting that the best anyone can hope for is an ineffectual “speed bump” that they hope will mollify the media companies. As often as not, these negotiations collapse under their own weight.

Then there’s the matter of patents: companies that think DRM is a good idea also love software patents, and the result is an impenetrable thicket of patents that make getting anything done next to impossible. The W3C’s patent-pooling mechanism (which is uniquely comprehensive in the standards world and stands as an example of the best way to do this sort of thing) was essential to making DRM standardization possible. What’s more, there are key players in the DRM world, like Adobe, who hold significant patent portfolios but are playing an ever-dwindling role in the world of DRM (the avowed goal of EME was to “kill Flash”). If the companies involved had to all sit down and negotiate a new patent deal without the W3C’s framework, any of these companies could “turn troll” and insist that all the rest would have to shell out big dollars to license their patents — they have nothing to lose by threatening the entire enterprise, and everything to gain from even a minuscule per-user royalty for something that will be rolled out into three billion browsers.

Finally, there’s no indication that EME had anything to do with protecting legitimate business interests. Streaming services like Netflix rely on customers subscribing to a whole library, with new material added constantly and a recommendation engine to help them navigate the catalog; a leaked copy of any single title does little to undermine the value of that package.

DRM for streaming video is all about preventing competition, not protecting copyrights. The purpose of DRM is to give companies the legal tools to prevent activities that would otherwise be allowed. The DRM part doesn’t have to “work” (in the sense of preventing copyright infringement) so long as it allows for the invocation of the DMCA.

To see how true this is, just look at Widevine, Google’s version of EME. Google bought the company that made Widevine in 2010, but it wasn’t until 2016 that an independent researcher actually took a close look at how well it prevented videos from leaking. That researcher, David Livshits, found that Widevine was trivial to circumvent, that it had been since its inception, and that the errors that made it so ineffective were obvious on even a cursory examination. If the millions of dollars and the high-powered personnel committed to EME had been devoted to creating a technology that effectively prevented copyright infringement, you’d think that Netflix or one of the other media companies in the negotiations would have diverted some of those resources to a quick audit to make sure the stuff actually worked as advertised.

(Funny story: Livshits is an Israeli at Ben-Gurion University, and Israel happens to be the rare country that doesn’t ban breaking DRM, meaning that Israelis are among the only people who can do this kind of research without fear of legal retaliation.)

But the biggest proof that EME was just a means to shut down legitimate competitors — and not an effort to protect copyright — is what happened next.

A CONTROLLED EXPERIMENT

When EFF joined the W3C, our opening bid was “Don’t make DRM.”

We put the case to the organization, describing the way that DRM interferes with the important copyright exceptions (like those that allow people to record and remix copyrighted works for critical or transformative purposes) and the myriad problems presented by the DMCA and laws like it around the world.

The executive team of the W3C basically dismissed all arguments about fair use and user rights in copyright as a kind of unfortunate casualty of the need to keep Netflix from ditching the web in favor of apps. As for the DMCA, they said that they couldn’t do anything about this crazy law, but they were sure that the W3C’s members were not interested in abusing it; they just wanted to keep their high-value movies from being shared on the internet.

So we changed tack, and proposed a kind of “controlled experiment” to find out what the DRM fans at the W3C were trying to accomplish.

The W3C is a consensus body: it makes standards by getting everyone in a room to compromise, moving toward a position that everyone can live with. Our ideal world was “No DRM at the W3C,” and DRM is a bad enough idea that it was hard to imagine much of a compromise from there.

But after listening closely to the DRM side’s disavowals of DMCA abuse, we thought we could find something that would represent an improvement on the current status quo and that should fit with their stated views.

We proposed a kind of DRM non-aggression pact, through which W3C members would promise that they’d only sue people under laws like DMCA 1201 if there was some other law that had been broken. So if someone violates your copyright, or incites someone to violate your copyright, or interferes with your contracts with your users, or misappropriates your trade secrets, or counterfeits your trademarks, or does anything else that violates your legal rights, you can throw the book at them.

But if someone goes around your DRM without violating any other law, the non-aggression pact means you can’t use the W3C-standardized DRM as a route to legally shutting them down. That would protect security researchers, it would protect people analyzing video to add subtitles and other assistive features, it would protect archivists who had the legal right to make copies, and it would protect people making new browsers.

If all you care about is making an effective technology that prevents lawbreaking, this agreement should be a no-brainer. For starters, if you think DRM is an effective technology, it shouldn’t need a ban on criticism in order to survive.

And since the nonaggression pact kept all other legal rights intact, there was no risk that agreeing to it would allow someone to break the law with impunity. Anyone who violated copyrights (or any other rights) would be square in the DMCA’s crosshairs, and companies would have their finger on the trigger.

NOT SURPRISED BUT STILL DISAPPOINTED

Of course, they hated this idea.

The studios, the DRM vendors and the large corporate members of the W3C participated in a desultory, brief “negotiation” before voting to terminate further discussion and press on. The W3C executive helped them dodge discussions, chartering further work on EME without any parallel work on protecting the open web, even as opposition within the W3C mounted.

By the time the dust settled, EME was published after the most divided votes the W3C had ever seen, with the W3C executive unilaterally declaring that issues with security research, accessibility, archiving and innovation had been dealt with as much as they could be (despite the fact that literally nothing binding was done about any of these things). The W3C’s “consensus” process had been so thoroughly hijacked that EME’s publication was supported by only 58% of the members who voted in the final poll, and many of those members expressed regret that they had been cornered into voting for something they objected to.

When the W3C executive declared that any protections for the open web were incompatible with the desires of the DRM-boosters, it was a kind of ironic vindication. After all, this is where we’d started, with EFF insisting that DRM wasn’t compatible with security disclosures, with accessibility, with archiving or innovation. Now, it seemed, everyone agreed.

What’s more, they all implicitly agreed that DRM wasn’t about protecting copyright. It was about using copyright to seize other rights, like the right to decide who could criticize your product — or compete with it.

DRM’s sham cryptography means that it only works if you’re not allowed to know about its defects. This proposition was conclusively proved when a W3C member proposed that the Consortium should protect disclosures that affected EME’s “privacy sandbox” and opened users to invasive spying, and within minutes, Netflix’s representative said that even this was not worth considering.

In a twisted way, Netflix was right. DRM is so fragile, so incoherent, that it is simply incompatible with the norms of the marketplace and science, in which anyone is free to describe their truthful discoveries, even if they frustrate a giant company’s commercial aspirations.

The W3C tacitly admitted this when they tried to convene a discussion group to come up with some nonbinding guidelines for when EME-using companies should use the power of DRM law to punish their critics and when they should permit the criticism.

“RESPONSIBLE DISCLOSURE” ON OUR TERMS, OR JAIL

They called this “responsible disclosure,” but it was far from the kinds of “responsible disclosure” we see today. In current practice, companies offer security researchers enticements to disclose their discoveries to vendors before going public. These enticements range from bug-bounty programs that pay out cash, to leaderboards that provide glory to the best researchers, to binding promises to act on disclosures in a timely way, rather than crossing their fingers, sitting on the newly discovered defects, and hoping no one else re-discovers them and exploits them.

The tension between independent security researchers and corporations is as old as computing itself. Computers are hard to secure, thanks to their complexity. Perfection is elusive. Keeping the users of networked computers safe requires constant evaluation and disclosure, so that vendors can fix their bugs and users can make informed decisions about which systems are safe enough to use.

But companies aren’t always the best stewards of bad news about their own products. As researchers have discovered — the hard way — telling a company about its mistakes may be the polite thing to do, but it’s very risky behavior, apt to get you threatened with legal reprisals if you go public. Many’s the researcher who told a company about a bug, only to have the company sit on that news for an intolerably long time, putting its users at risk. Often, these bugs only come to light when they are independently discovered by bad actors, who figure out how to exploit them, turning them into attacks that compromise millions of users, so many that the bug’s existence can no longer be swept under the rug.

As the research world grew more gun-shy about talking to companies, companies were forced to make real, binding assurances that they would honor researchers’ discoveries by taking swift action within a defined period, by promising not to threaten researchers over presenting their findings, and even by bidding for researchers’ trust with cash bounties. Over the years, the situation has improved, with most big companies offering some kind of disclosure program.

But the reason companies offer those bounties and assurances is that they have no choice. Telling the truth about defective products is not illegal, so researchers who discover those truths are under no obligation to play by companies’ rules. That forces companies to demonstrate their goodwill with good conduct, binding promises and pot-sweeteners.

Companies definitely want to be able to decide who can tell the truth about their products and when. We know that because when they get the chance to flex that muscle, they flex it. We know it because they said so at the W3C. We know it because they demanded that they get that right as part of the DRM package in EME.

Of all the lows in the W3C DRM process, the most shocking was when the historic defenders of the open web tried to turn an effort to protect the rights of researchers to warn billions of people about harmful defects in their browsers into an effort to advise companies on when they should hold off on exercising that right — a right they wouldn’t have without the W3C making DRM for the web.

DRM IS THE OPPOSITE OF SECURITY

From the first days of the DRM fight at the W3C, we understood that the DRM vendors and the media companies they supplied weren’t there to protect copyright, they were there to grab legally enforceable non-copyright privileges. We also knew that DRM was incompatible with security research: because DRM relies on obfuscation, anyone who documents how DRM works also makes it stop working.

This is especially clear from what wasn’t said at the W3C: when we proposed that people should be able to break DRM to generate subtitles or conduct security audits, the arguments were always about whether that was acceptable, never about whether it was possible.

Recall that EME is supposed to be a system that helps companies ensure that their movies aren’t saved to their users’ hard-drives and shared around the internet. For this to work, it should be, you know, hard to do that.

But in every discussion of when people should be allowed to break EME, it was always a given that anyone who wanted to could do so. After all, when you hide secrets in software you give to people who you want to keep them secret from, you are probably going to be disappointed.

From day one, we understood that we would arrive at a point at which the DRM advocates at the W3C would be obliged to admit that the survival of their plan relied on being able to silence people who examined their products.

However, we did hold out hope that when this became clear to everyone, they would understand that DRM couldn’t peacefully co-exist with the open web.

We were wrong.

THE W3C IS THE CANARY IN THE COALMINE

The success of DRM at the W3C is a parable about market concentration and the precarity of the open web. Hundreds of security researchers lobbied the W3C to protect their work, UNESCO publicly condemned the extension of DRM to the web, and the many cryptocurrency members of the W3C warned that using browsers for secure, high-stakes applications like moving around people’s life savings could only happen if browsers were subjected to the same security investigations as every other technology in our lives (except DRM technologies).

There is no shortage of businesses that want to be able to control what their customers and competitors do with their products. When the US Copyright Office held hearings on DRM in 2015, they heard about DRM in medical implants and cars, farm equipment and voting machines. Companies have discovered that adding DRM to their products is the most robust way to control the marketplace, a cheap and reliable way to convert commercial preferences about who can repair, improve, and supply their products into legally enforceable rights.

The marketplace harms from this anti-competitive behavior are easy to see. For example, the aggressive use of DRM to lock out independent repair shops ends up diverting tons of e-waste to landfill or recycling, at the cost of local economies and people’s ability to get full use out of their property. A phone that you recycle instead of repairing is a phone you have to pay to replace — and repair creates many more jobs than recycling (recycling a ton of e-waste creates 15 jobs; repairing it creates 150). Repair jobs are local, entrepreneurial jobs, because you don’t need a lot of capital to start a repair shop, and your customers want to bring their gadgets to someone local for service (no one wants to send a phone to China for repairs — let alone a car!).

But those economic harms are only the tip of the iceberg. Laws like DMCA 1201 incentivize DRM by promising the power to control competition, but DRM’s worst harms are in the realm of security. When the W3C published EME, it bequeathed to the web an unauditable attack-surface in browsers used by billions of people for their most sensitive and risky applications. These browsers are also the control panels for the Internet of Things: the sensor-studded, actuating gadgets that can see us, hear us, and act on the physical world, with the power to boil, freeze, shock, concuss, or betray us in a thousand ways.

The gadgets themselves have DRM, intended to lock out repairs and third-party consumables, meaning that everything from your toaster to your car is becoming off-limits to scrutiny by independent researchers who can give you unvarnished, unbiased assessments of the security and reliability of these devices.

In a competitive market, you’d expect non-DRM options to proliferate in answer to this bad behavior. After all, no customer wants DRM: no car dealer ever sold a new GM by boasting that it was a felony for your favorite mechanic to fix it.

But we don’t live in a competitive market. Laws like DMCA 1201 undermine the competition that might counter their worst effects.

The companies that fought DRM at the W3C — browser vendors, Netflix, tech giants, the cable industry — all trace their success to business strategies that shocked and outraged established industry when they first emerged. Cable started as unlicensed businesses that retransmitted broadcasts and charged for it. Apple’s dominance started with ripping CDs and ignoring the howls of the music industry (just as Firefox got where it is by blocking obnoxious ads and ignoring the web-publishers who lost millions as a result). Of course, Netflix’s revolutionary red envelopes were treated as a form of theft.

These businesses started as pirates and became admirals, and treat their origin stories as legends of plucky, disruptive entrepreneurs taking on a dinosauric and ossified establishment. But they treat any disruption aimed at them as an affront to the natural order of things. To paraphrase Douglas Adams, any technology invented in your adolescence is amazing and world-changing; anything invented after you turn 30 is immoral and needs to be destroyed.

LESSONS FROM THE W3C

Most people don’t understand the risks of DRM. The topic is weird, technical, esoteric, and takes too long to explain. The pro-DRM side wants to make the debate about piracy and counterfeiting, and those are easy stories to tell.

But people who want DRM don’t really care about that stuff, and we can prove it: just ask them if they’d be willing to promise not to use the DMCA unless someone is violating copyright, and watch them squirm and weasel about why policing copyright involves shutting down competitive activities that don’t violate copyright. Point out that they didn’t even question whether someone could break their DRM, because, of course, DRM is so technologically incoherent that it only works if it’s against the law to understand how it works, and it can be defeated just by looking closely at it.

Ask them to promise not to invoke the DMCA against people who have discovered defects in their products and listen to them defend the idea that companies should get a veto over publication of true facts about their mistakes and demerits.

These inconvenient framings at least establish what we’re fighting about, dispensing with the disingenuous arguments about copyright and moving on to the real issues: competition, accessibility, security.

This won’t win the fight on its own. These are still wonky and nuanced ideas.

One thing we’ve learned from 15-plus years fighting DRM: it’s easier to get people to take notice of procedural issues than substantive ones. We labored in vain to get people to take notice of the Broadcasting Treaty, a bafflingly complex and horribly overreaching treaty from WIPO, a UN specialized agency. No one cared until someone started stealing piles of our handouts and hiding them in the toilets so no one could read them. That was global news: it’s hard to figure out what something like the Broadcast Treaty is about, but it’s easy to call shenanigans when someone tries to hide your literature in the toilet so delegates don’t see the opposing view.

So it was that four years of beating the drum about DRM at the W3C barely broke the surface, but when we resigned from the W3C over the final vote, everyone sat up and took notice, asking how they could help fix things. The short answer is, “It’s too late: we resigned because we had run out of options.”

But the long answer is a little more hopeful. EFF is suing the US government to overturn Section 1201 of the DMCA. As we proved at the W3C, there is no appetite for making DRM unless there’s a law like DMCA 1201 in the mix. DRM on its own does nothing except provide an opportunity for competitors to kick butt with innovative offerings that cost less and do more.

The Copyright Office is about to hold fresh hearings about DMCA 1201.

The W3C fight proved that we could shift the debate to the real issues. The incentives that led to the W3C being colonized by DRM are still in play, and other organizations will face this threat in the years to come. We’ll continue to refine this tactic, keep fighting, and keep reporting on how it goes so that you can help us fight. All we ask is that you keep paying attention. As we learned at the W3C, we can’t do it without you.

November 27, 2017 at 07:23PM
via Deeplinks http://ift.tt/2hWq9iH

Appeals Court’s Disturbing Ruling Jeopardizes Protections for Anonymous Speakers

By sophia

A federal appeals court has issued an alarming ruling that significantly erodes the Constitution’s protections for anonymous speakers—and simultaneously hands law enforcement a near unlimited power to unmask them.

The Ninth Circuit’s decision in U.S. v. Glassdoor, Inc. is a significant setback for the First Amendment. The ability to speak anonymously online without fear of being identified is essential because it allows people to express controversial or unpopular views. Strong legal protections for anonymous speakers are needed so that they are not harassed, ridiculed, or silenced merely for expressing their opinions.

In Glassdoor, the court’s ruling ensures that any grand jury subpoena seeking the identities of anonymous speakers will be valid virtually every time. The decision is a recipe for disaster precisely because it provides little to no legal protections for anonymous speakers.

EFF applauds Glassdoor for standing up for its users’ First Amendment rights in this case and for its commitment to do so moving forward. Yet we worry that without stronger legal standards—which EFF and other groups urged the Ninth Circuit to apply (read our brief filed in the case)—the government will easily compel platforms to comply with grand jury subpoenas to unmask anonymous speakers.

The Ninth Circuit Undercut Anonymous Speech by Applying the Wrong Test

The case centers on a federal grand jury in Arizona investigating allegations of fraud by a private contractor working for the Department of Veterans Affairs. The grand jury issued a subpoena seeking the identities of eight users who had posted about the contractor on Glassdoor, which operates an online platform where current and former employees comment anonymously on their employers.

Glassdoor challenged the subpoena by asserting its users’ First Amendment rights. When the trial court ordered Glassdoor to comply, the company appealed to the U.S. Court of Appeals for the Ninth Circuit.

The Ninth Circuit ruled that because the subpoena was issued by a grand jury as part of a criminal investigation, Glassdoor had to comply absent evidence that the investigation was being conducted in bad faith.

There are several problems with the court’s ruling, but the biggest is that in adopting a “bad faith” test as the sole limit on when anonymous speakers can be unmasked by a grand jury subpoena, it relied on a U.S. Supreme Court case called Branzburg v. Hayes.

In challenging the subpoena, Glassdoor rightly argued that Branzburg was not relevant because it dealt with whether journalists have a First Amendment right to protect the identities of their confidential sources in the face of grand jury subpoenas and, more generally, whether journalists have a First Amendment right to gather the news. This case, however, squarely deals with Glassdoor users’ First Amendment right to speak anonymously.

The Ninth Circuit ran roughshod over the issue, calling it “a distinction without a difference.” But here’s the problem: although the law is all over the map as to whether the First Amendment protects journalists’ ability to guard their sources’ identities, there is absolutely no question that the First Amendment grants anonymous speakers the right to protect their identities.

The Supreme Court has repeatedly ruled that the First Amendment protects anonymous speakers, often by emphasizing the historic importance of anonymity in our social and political discourse. For example, many of our founders spoke anonymously while debating the provisions of our Constitution.

Because the Supreme Court in Branzburg did not outright rule that reporters have a First Amendment right to protect their confidential sources, it adopted a rule that requires a reporter to respond to a grand jury subpoena for their source’s identity unless the reporter can show that the investigation is being conducted in bad faith. This is a very weak standard and difficult to prove.

By contrast, because the right to speak anonymously has been firmly established by the Supreme Court and in jurisdictions throughout the country, the tests for when parties can unmask those speakers are more robust and protective of their First Amendment rights. These tests more properly calibrate the competing interests between the government’s need to investigate crime and the First Amendment rights of anonymous speakers.

The Ninth Circuit’s reliance on Branzburg effectively eviscerates any substantive First Amendment protections for anonymous speakers by not imposing any meaningful limitation on grand jury subpoenas. Further, the court’s ruling puts the burden on anonymous speakers—or platforms like Glassdoor standing in their shoes—to show that an investigation is being conducted in bad faith before setting aside the subpoena.

The Ninth Circuit’s reliance on Branzburg is also wrong because the Supreme Court ruling in that case was narrow and limited to the situation involving reporters’ efforts to guard the identities of their confidential sources. As Justice Powell wrote in his concurrence, “I … emphasize what seems to me to be the limited nature of the Court’s ruling.” The standards in that unique case should not be transported to cases involving grand jury subpoenas to unmask anonymous speakers generally. However, that’s what the court has done—expanded Branzburg to now apply in all instances in which a grand jury subpoena targets individuals whose identities are unknown to the grand jury.

Finally, the Ninth Circuit’s use of Branzburg is further improper because there are a number of other cases and legal doctrines that more squarely address how courts should treat demands to pierce anonymity. Indeed, as we discussed in our brief, there is a whole body of law that applies robust standards to unmasking anonymous speakers, including the Ninth Circuit’s previous decision in Bursey v. U.S., which also involved a grand jury.

The Ninth Circuit Failed to Recognize the Associational Rights of Anonymous Online Speakers

The court’s decision is also troubling because it takes an extremely narrow view of the kind of anonymous associations that should be protected by the First Amendment. In dismissing claims by Glassdoor that the subpoena chilled its users’ First Amendment rights to privately associate with others, the court ruled that because Glassdoor was not itself a social or political organization such as the NAACP, the claim was “tenuous.”

There are several layers to the First Amendment right of association, including the ability of individuals to associate with others, the ability of individuals to associate with a particular organization or group, and the ability for a group or organization to maintain the anonymity of members or supporters.

Although it’s true that Glassdoor users are not joining an organization like the NAACP or a union, the court’s analysis ignores that other associational rights are implicated by the subpoena in this case. At minimum, Glassdoor’s online platform offers the potential for individuals to organize and form communities around their shared employment experiences. The First Amendment must protect those interests even if Glassdoor lacks an explicit political goal.

Moreover, even if it’s true that Glassdoor users may not have an explicitly political goal in commenting on their current or past employers, they are still associating online with others with similar experiences to speak honestly about what happens inside companies, what their professional experiences are like, and how they believe those employers can improve.

The risk of being identified as a Glassdoor user is a legitimate one that courts should recognize as analogous to the risks of civil rights groups or unions being compelled to identify their members. Disclosure in both instances chills individuals’ abilities to explore their own experiences, attitudes, and beliefs.

The Ninth Circuit Missed an Opportunity to Vindicate Online Speakers’ First Amendment Rights

Significantly absent from the court’s decision was any real discussion about the value of anonymous speech and its historical role in our country. This is a shame because the case would have been a great opportunity to show the importance of First Amendment protections for online speakers.

EFF has long fought for anonymity online because we know its importance in fostering robust expression and debate. Subpoenas such as the one issued to Glassdoor deter people from speaking anonymously about issues related to their employment. Glassdoor provides a valuable service because its anonymous reviews help inform other people’s career choices while also keeping employers accountable to their workers and potentially the general public.

The Ninth Circuit’s decision appeared unconcerned with this reality, and its “bad faith” standard places no meaningful limit on the use of grand jury subpoenas to unmask anonymous speakers. This will ultimately harm speakers who can now be more easily targeted and unmasked, particularly if they have said something controversial or offensive. 

November 15, 2017 at 02:38AM
via Deeplinks http://ift.tt/2ms7EaN

Suspending the Catalan Parliament, Spain Destroys the EU’s “Rule of Law” Figleaf.

By craig

It takes a very special kind of chutzpah systematically to assault voters, drag them from polling booths by their hair, and then say that a low turnout invalidates the vote. That is the shameless position being taken by the Europe-wide political Establishment and its corporate media lackeys. This Guardian article illustrates a refinement of this already extreme act of intellectual dishonesty. It states voter turnout was 43%. That ignores the 770,000 votes which were cast but physically confiscated by the police so they could not be counted. Those votes take turnout to over 50%.

That is an incredibly high turnout, given that 900 voters were brutalised so badly they needed formal medical treatment. The prospect of being smashed in the face by a club would naturally deter a number of people from voting. The physical closure of polling stations obviously stopped others from voting. It is quite incredible that in these circumstances, over 50% of the electorate did succeed in casting a vote.

To enable this, of course, required some deviation from norms. People were allowed to vote at any polling station. Manfred Weber, a right-wing German politician from the Bavarian Christian Democrats, leads the largest group in the European Parliament, which includes Rajoy’s Popular Party. He was therefore the first speaker in the EU Parliament debate on events in Catalonia, and managed not to mention police violence or human rights at all in his speech. He did, however, find time to mock the Catalan authorities for making these last-minute changes to voting procedures, which he said invalidated the result.

Weber is no stranger to using spurious “legalities” to support the jackbooted oppressor. His party has attempted to close down EU Commission programmes to build schools and clinics for Palestinian children in the occupied West Bank, on the grounds they do not have planning permission from the Israeli authorities.

The obvious answer to the objection of Weber and others on the running of the referendum is to have another one, agreed by all and run in strict accordance with international standards. Yet strangely, despite their complaints about the process, they do not want a better process. Rather, they do not wish people to be allowed to vote at all.

There are however no arguments that the Catalan Parliament was elected in anything but the proper manner. Its suspension by the Spanish Constitutional Court – a body on which 10 out of 12 members are political appointees – is therefore not due to any doubts about the Catalan Parliament’s legitimacy.

No, the Catalan Parliament has been suspended because the Constitutional Court fears it may be about to vote in a way that the Spanish government does not like.

Note that it has not even done this yet. Nobody knows how its members will actually vote, until they vote. The Constitutional Court is suspending a democratically elected body in case it takes a democratic vote of its members.

This makes the EU look pretty silly. It was looking pretty silly anyway. I telephoned the Cabinet today of Frans Timmermans, the EU Commissioner who told the European Parliament that Spain was entitled to use force against the Catalans and it had been proportionate. I spoke to a pleasant young man responsible for the “rule of law and fundamental rights” portfolio in the Cabinet. I got through by using my “Ambassador” title.

Here is the thing. He was genuinely shocked to hear that people thought the Commission’s support for use of force was wrong. He stated that it had not been Timmermans’ intention to say the use of force had been proportionate, but rather that it must be proportionate. He became very agitated and refused to answer when I repeatedly questioned him as to whether he thought the use of force had in fact been proportionate. I suggested to him rather strongly that in refusing to acknowledge the disproportionate use of force, he was in effect lying. I pointed out that Timmermans had supported use of force and said “rule of law” over and over again, but scarcely mentioned human rights.

Here is the thing. It was plain that his shock was genuine, and he had no idea whatsoever of the social media reaction to Timmermans’ speech. I told him to search for Timmermans on Twitter and Facebook and see for himself, and he agreed to do so. The problem is, these people live in a Brussels bubble where they interact with other Eurocrats and national diplomats, and members of the Establishment media, but have no connection at all to the citizenry of the EU. Nor had he seen the Amnesty International report, which I subsequently emailed him.

The rule of law is not everything. Apartheid was legally enforced in South Africa. Mr Weber’s Nazi antecedents had laws. British colonialism was enforced by laws. Nor is the administration of the law always impartial. Apartheid had its judges. Pinochet had judges to enact his version of the “rule of law”.

Actually all dictators are very big on “the rule of law”.

The most sinister thing Timmermans said to the European Parliament was “There can be no human rights without the rule of law”. Sinister because he did not balance it with “there can be no rule of law without human rights”.

What Spain is attempting now to impose on Catalonia is rule of law without democracy. I am going to be most interested to see how Brussels manages to justify that. We are seeing a whipping up of hatred by a central government against a national and linguistic minority and a suppression of its freedoms and institutions.

The highly politicised Spanish Constitutional Court, in suspending a democratically elected parliament because it does not like its views, has pointed up today that it is not sufficient for the EU to simply parrot “rule of law”. Spain currently has a Francoist Party in power with a Francoist judiciary intent on closing down democracy in Catalonia.

The rule of law within the EU has to stem from democracy, and to respect human rights. Neither is true in Rajoy’s Spain.

————————————————————-

I continue urgently to need contributions to my defence in the libel action against me by Jake Wallis Simons, Associate Editor of Daily Mail online. You can see the court documents outlining the case here. I am threatened with bankruptcy and the end of this blog (not to mention a terrible effect on my young family). Support is greatly appreciated. An astonishing 4,000 people have now contributed a total of over £75,000. But that is still only halfway towards the £140,000 target. I realise it is astonishing that so much money can be needed, but that is the pernicious effect of England’s draconian libel laws, as explained here.


On a practical point, a number of people have said they are not members of PayPal so could not donate. After clicking on “Donate”, just below and left of the “Log In” button is a small “continue” link which enables you to donate by card without logging in.

For those who prefer not to pay online, you can send a cheque made out to me to Craig Murray, 89/14 Holyrood Road, Edinburgh, EH8 8BA. As regular readers know, it is a matter of pride to me that I never hide my address.


October 5, 2017 at 06:21PM
via Craig Murray http://ift.tt/2z1E9Pr

The killing of history


Reporting from New York, John Pilger describes the re-writing of the history of the Vietnam War in the 10-part television series by Ken Burns and Lynn Novick. Millions died "in good faith", they say. And so yet more wars are justified - as President Trump tells the world he is prepared to "totally destroy" North Korea and its 25 million people.

September 21, 2017 at 12:00AM
via JohnPilger.com – the films and journalism of John Pilger http://ift.tt/2xW37m6

I Have Nothing to Hide – Really? Here’s why privacy matters to all of us

By Arne Möhle

The statement “I have nothing to hide” is very popular. But recently a comeback to this statement has become just as popular: “Then give me your bank account login, your email login, your Facebook login.” Most people refuse instantly, and for a good reason: everybody has something to hide. To convince everybody – once and for all – let’s take a deep dive into why privacy matters and how everybody can protect their private data easily.

Privacy Is a Basic Human Right

Privacy online and offline is a basic human right not because we have something to hide, but because it protects all people, whether or not they have something to hide today. You don’t want your neighbor to spy on you, so why should a government or an Internet service be allowed to see and use your data for their own purposes?

Privacy Protects Minorities

Many governments already spy on their citizens to prevent political opposition. Even politicians in Western democracies are increasingly in favor of online surveillance, falsely claiming that this would protect us from terrorist attacks. This is a worrisome development, as the right to privacy is crucial when it comes to protecting people with oppositional political views. Autocratic systems around the world show us how dangerous it is to give up our right to privacy – not only for the people affected, but also for society as a whole: when self-censorship becomes the norm, true debate – essential to any democracy – becomes impossible.

Privacy Saves You Money

Companies use your data to show you personalized advertisements. Some people even say they like seeing ads they are interested in, but this form of advertising is not just invasive, it is also very costly: from online tracking, the advertising company knows exactly what you are looking for, and it more or less knows what you are willing to spend. Because of all the data it has accumulated about you and about lots of other Internet users matching your browsing profile, it will not show you the best deal available. Instead it will show you very targeted advertisements that will very likely make you pay more than you need to.

Privacy Is Safety

The Internet is a great place where we can share every idea freely. However, there are a lot of criminals active online whose only goal is to steal your identity by gaining access to online accounts such as email, PayPal, or Facebook. It is important to keep your online identity secure and protect it from malicious attacks so that no one can use your accounts to steal money.

Companies Must Protect Privacy

The latest Equifax hack is a prime example of how a company should not handle people’s data. Private information must always be securely encrypted so that a potential attacker has no chance of stealing the personal information of millions of people. That’s also why a backdoor to encrypted services is never an option: any backdoor will sooner or later be abused by criminals.
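
As a sketch of what “securely encrypted” can mean in practice, here is the zero-knowledge pattern that privacy-focused services argue for, in TypeScript with the standard WebCrypto API. The function names and parameter choices are illustrative, not any particular company’s design: the key is derived from the user’s own password, so the service holds no master key, a stolen database is just ciphertext, and there is no backdoor to hand over.

```ts
// Minimal sketch, not a vetted design: PBKDF2 key derivation + AES-GCM.

async function deriveUserKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]);
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" }, // illustrative cost
    material, { name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
}

async function encryptRecord(key: CryptoKey, record: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(record));
  return { iv, ciphertext }; // all the server ever stores; useless without the password
}
```

With this design, protecting users no longer depends on the company’s goodwill: even a full breach of the server yields nothing readable.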

Data Is the Currency of the 21st Century

The problem today is that data is of high value to most online services. As many offer their services for free, their business model depends on gathering users’ data, profiling them, and posting targeted ads or selling the data on to advertisers. This process is designed to serve one purpose: make as much money for the company involved as possible. Protecting people’s privacy is just a hassle that costs money – not something these companies want to invest in. That is why data leaks like the Equifax hack have become so numerous: companies simply don’t care enough to adequately protect their users’ data against attackers.

People Must Protect Their Privacy Themselves

It would be desirable for this to change, for companies to protect their users’ data with strong encryption. However, this costs money, so unless users force companies to protect their data, they will never do it. Fortunately, users have more power than they think: by choosing privacy-friendly services that fully protect their data with encryption, they force all companies to see the right to privacy for what it is: a key selling feature.

How to Protect Your Data

You can make a change today by switching to privacy-friendly, encrypted services. Here are some suggestions:

* Use a VPN such as PIA to encrypt your Internet traffic.

* Use an encrypted email service such as Tutanota.

* Use private search engines such as Qwant.

* Use encrypted chat apps.

By making the switch today, you’ll stop the Internet spies from abusing your data! On top of that, you’ll be fighting alongside us for our right to privacy – not only to protect your data, but also to protect our democracy.


September 20, 2017 at 02:00PM
via Privacy Online News http://ift.tt/2wyJ8K4

The Cybercrime Convention’s New Protocol Needs to Uphold Human Rights

By danny

As part of an ongoing attempt to help law enforcement obtain data across international borders, the Council of Europe’s Cybercrime Convention—finalized in the weeks following 9/11, and ratified by the United States and over 50 countries around the world—is back on the global lawmaking agenda. This time, the Council’s Cybercrime Convention Committee (T-CY) has initiated a process to draft a second additional protocol to the Convention—a new text which could allow direct foreign law enforcement access to data stored in other countries’ territories. EFF has joined EDRi and a number of other organizations in a letter to the Council of Europe, highlighting some anticipated concerns with the upcoming process and seeking to ensure civil society concerns are considered in the new protocol. The new protocol needs to live up to the Council of Europe’s stated aim of upholding human rights, and must not undermine privacy or the integrity of our communication networks.

How the Long Arm of Law Reaches into Foreign Servers

Thanks to the internet, individuals and their data increasingly reside in different jurisdictions: your email might be stored on a Google server in the United States, while your shared Word documents might be stored by Microsoft in Ireland. Law enforcement agencies across the world have sought to gain access to this data, wherever it is held. That means police in one country frequently seek to extract personal, private data from servers in another.

Currently, the primary international mechanism for facilitating governmental cross border data access is the Mutual Legal Assistance Treaty (MLAT) process, a series of treaties between two or more states that create a formal basis for cooperation between designated authorities of signatories. These treaties typically include some safeguards for privacy and due process, most often the safeguards of the country that hosts the data.

The MLAT regime includes steps to protect privacy and due process, but frustrated agencies have increasingly sought to bypass it, either by cross-border hacking or by leaning on large service providers in foreign jurisdictions to hand over data voluntarily.

The legalities of cross-border hacking remain very murky, and its operation is the very opposite of transparent and proportionate. Meanwhile, voluntary cooperation between service providers and law enforcement occurs outside the MLAT process and without any clear accountability framework. The primary window into its scope and operation comes from the annual transparency reports voluntarily issued by some companies, such as Google and Twitter.

Hacking often blatantly ignores the laws and rights of a foreign state, but voluntary data handovers can be used to bypass domestic legal protections too. In Canada, for example, the right to privacy includes rigorous safeguards for online anonymity: private Internet companies are not permitted to identify customers without prior judicial authorization. By identifying often sensitive anonymous online activity directly, through the voluntary cooperation of a foreign company not bound by Canadian privacy law, law enforcement agents can effectively bypass this domestic privacy standard.

Faster, but not Better: Bypassing MLAT

The MLAT regime has been criticized as slow and inefficient. Law enforcement officers have claimed that they have to wait anywhere between 6 and 10 months (the reported average time frame for receiving data through an MLAT request) for data necessary to their local investigations. Much of this delay, however, is attributable to a lack of adequate resources, streamlining, and prioritization for the huge increase in MLAT requests for data held in the United States, plus the absence of adequate training for law enforcement officers seeking to rely on another state’s legal search and seizure powers.

Instead of just working to make the MLAT process more effective, the T-CY committee is seeking to create a parallel mechanism for cross-border cooperation. While the process is still in its earliest stages, many are concerned that the resulting proposals will replicate many of the problems in the existing regime, while adding new ones.

What the New Protocol Might Contain

The Terms of Reference for the drafting of this new second protocol reveal some areas that may be included in the final proposal.

Simplified mechanisms for cross-border access

T-CY has flagged a number of new mechanisms it believes will streamline cross-border data access. The terms of reference mention a “simplified regime” for legal assistance with respect to subscriber data. Such a regime could be highly controversial if it compelled companies to identify anonymous online activity without prior judicial authorization. The terms of reference also envision the creation of “international production orders”: presumably, orders issued by one court under its own standards that must be respected by Internet companies in other jurisdictions. Such mechanisms could be problematic where they do not respect the privacy and due process rights of both jurisdictions.

Direct cooperation

The terms of reference also call for “provisions allowing for direct cooperation with service providers in other jurisdictions with regard to requests for [i] subscriber information, [ii] preservation requests, and [iii] emergency requests.” These mechanisms would be permissive, clearing the way for companies in one state to voluntarily cooperate with certain types of requests issued by another, even in the absence of any form of judicial authorization.

Each of the proposed direct cooperation mechanisms could be problematic. Preservation requests are not controversial per se: companies often have standard retention periods for different types of data, and preservation orders are intended to extend these so that law enforcement has sufficient time to obtain proper legal authorization to access the preserved data. However, preservation should not be undertaken frivolously. It can carry an accompanying stigma, and it exposes affected individuals’ data to greater risk if a security breach occurs during the preservation period. This is why some jurisdictions require reasonable suspicion and a court order before data may be preserved.

Direct voluntary cooperation on emergency matters is challenging as well. In such instances there is little time to engage the judicial apparatus, and most states recognize direct access to private customer data in emergency situations, but such access can still be subject to controversial overreach. The potential for overreach, and even abuse, becomes far higher where there is a disconnect between the standards of the requesting and responding jurisdictions.

Direct cooperation in identifying customers can be equally controversial. Anonymity is critical to privacy in digital contexts. Some data protection laws (such as Canada’s federal privacy law) prevent Internet companies from voluntarily providing subscriber data to law enforcement.

Safeguards

The terms of reference also envision the adoption of “safeguards”. The scope and nature of these will be critical: one of the strongest criticisms of the original Cybercrime Convention has been its lack of specific protections and safeguards for privacy and other human rights. The EDRi letter calls for adherence to the Council of Europe’s data protection regime, Convention 108, as a minimum prerequisite for participation in the envisioned regime for cross-border access, which would provide some basis for shared privacy protection. The letter also calls for detailed statistical reporting and other safeguards.

What’s next?

On 18 September, the T-CY Bureau will meet with European Digital Rights (EDRi) to discuss the protocol. The first meeting of the Drafting Group will be held on 19 and 20 September. The draft protocol will then be prepared and finalized by the T-CY in closed session.

Law enforcement agencies are granted extraordinary powers to invade privacy in order to investigate crime. This proposed second protocol to the Cybercrime Convention must ensure that the highest privacy standards and due process protections adopted by signatory states remain intact.

We believe that the Council of Europe T-CY Committee (the Netherlands, Romania, Canada, the Dominican Republic, Estonia, Mauritius, Norway, Portugal, Sri Lanka, Switzerland, and Ukraine) should concentrate first on fixing the existing MLAT process, and should ensure that this new initiative does not become an exercise in harmonizing down to the lowest common denominator of international privacy protection. We’ll be keeping track of what happens next.


Attack on CCleaner Highlights the Importance of Securing Downloads and Maintaining User Trust

By gennie

Some of the most worrying attacks are those that exploit users’ trust in the systems and software they use every day. Yesterday, Cisco’s Talos security team uncovered just such an attack in the computer cleanup software CCleaner. Download servers at Avast, the company that owns CCleaner, had been compromised to distribute malware inside CCleaner 5.33 updates for at least a month. Avast estimates that over 2 million users downloaded the affected update. Even worse, CCleaner’s popularity with journalists and human rights activists means that particularly vulnerable users are almost certainly among that number. Avast has advised CCleaner Windows users to update their software immediately.

This is often called a “supply chain” attack, referring to all the steps software takes to get from its developers to its users. As more and more users master bread-and-butter personal security practices like enabling two-factor authentication and detecting phishing, malicious hackers are forced to stop targeting users and move “up” the supply chain to the companies and developers that make the software. This means that developers need to get into the practice of “distrusting” their own infrastructure to ensure safer software releases, with reproducible builds allowing third parties to double-check whether released binary and source packages correspond. The goal should be to secure internal development and release infrastructure to the point that no hijacking, even by a malicious actor inside the company, can slip through unnoticed.
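As a rough illustration of the check that reproducible builds make possible, here is a hedged Python sketch; the file names are hypothetical, not Avast’s actual artifacts. If an independent rebuild from the published source hashes identically to the official binary, the release servers have not silently swapped in something else.

```python
# Sketch: comparing an official release against an independent rebuild.
import hashlib

def sha256(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256("official-release.exe")    # downloaded from the vendor
rebuilt = sha256("independent-build.exe")    # built from the published source
print("OK" if official == rebuilt else "MISMATCH: possible supply chain tampering")
```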

The harms of this hack extend far beyond the 2 million users who were directly affected. Supply chain attacks undermine users’ trust in official sources, and take advantage of the security safeguards that users and developers rely on. Software updates like the one Avast released for CCleaner are typically signed with the developer’s un-spoof-able cryptographic key. But the hackers appear to have penetrated Avast’s download servers before the software update was signed, essentially hijacking Avast’s update distribution process and punishing users for the security best practice of updating their software.
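For readers unfamiliar with how that signing safeguard works, here is a minimal sketch using Ed25519 via Python’s cryptography library; this is an illustrative assumption, since Avast’s real update pipeline is not public. The updater accepts a release only if the signature verifies against the developer’s public key, which is exactly why a compromise that happens before signing is so damaging: the malicious update arrives with a perfectly valid signature.

```python
# Sketch: verifying a downloaded update against the developer's public key.
# A valid signature only proves the file is what the developer signed; it
# proves nothing if the build was compromised before the signing step.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def update_is_authentic(public_key_bytes: bytes, signature: bytes, update: bytes) -> bool:
    pub = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        pub.verify(signature, update)  # raises InvalidSignature on any tampering
        return True
    except InvalidSignature:
        return False
```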

Despite observations that these kinds of attacks are on the rise, the reality is that they remain extremely rare compared to the other kinds of attacks users might encounter. This and other supply chain attacks should not deter users from updating their software. Like any security decision, this is a trade-off: for every attack that might take advantage of the supply chain, there are a hundred that will take advantage of users not updating their software.

For users, sticking with trusted, official software sources and updating your software whenever prompted remains the best way to protect yourself from software attacks. For developers and software companies, the attack on CCleaner is a reminder of the importance of securing every link of the download supply chain.
