HTML5 DRM finally makes it as an official W3C Recommendation

By Peter Bright

(Image credit: Floyd Wilde)

The World Wide Web Consortium (W3C), the industry body that oversees development of HTML and related Web standards, has today published the Encrypted Media Extensions (EME) specification as a Recommendation, marking its final blessing as an official Web standard. Final approval came after the W3C’s members voted 58.4 percent to approve the spec, 30.8 percent to oppose, with 10.8 percent abstaining.

EME provides a standard interface for DRM protection of media delivered through the browser. EME is not itself a DRM scheme; rather, it defines how Web content can work with third-party Content Decryption Modules (CDMs) that handle the proprietary decryption and rights-management portion.
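
To make this concrete, here is a minimal sketch of what the EME flow looks like from a Web page's side, written in TypeScript. "com.widevine.alpha" is Widevine's real key-system identifier, but the license-server URL, codec string, and configuration details are placeholder assumptions, and a real player would add error handling and session management:

// Minimal sketch of the EME flow. The license-server URL and codec string
// are assumptions; only the API calls themselves are part of the spec.
const configs: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
}];

async function setupDrm(video: HTMLVideoElement): Promise<void> {
  // Ask the browser whether an installed CDM can satisfy this configuration.
  const access = await navigator.requestMediaKeySystemAccess(
    "com.widevine.alpha", configs);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // Fired when the media stream turns out to contain encrypted tracks.
  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // The CDM produces an opaque license request; the page just ferries it
    // to the license server and hands the response back.
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      const res = await fetch("https://license.example.com/widevine", {
        method: "POST",
        body: msg.message,
      });
      await session.update(await res.arrayBuffer());
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}

The point to notice is how little the page sees: the license request and response are opaque blobs, and the decryption itself happens inside the proprietary module.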

The development of EME has been contentious, for reasons both ideological and legal. Some groups, such as the Free Software Foundation, oppose DRM in any context or application. Others do not object to DRM per se but are concerned by regulations such as the US Digital Millennium Copyright Act (DMCA). Under the DMCA, bypassing DRM is outlawed even if the bypass is intended to enable activities that are otherwise legal. These concerns are particularly acute in the context of the Web: for many, the Web should be open, without technological restrictions on what can be done with Web content, and the protection that DRM offers is anathema to this. Moreover, while browsers themselves can be fully open source, CDMs are proprietary, with no source code available.


September 18, 2017 at 08:25PM
via Ars Technica UK http://ift.tt/2jEZECd

Avast! There’s malware in that CCleaner software update

By Sean Gallagher


A software update for a Windows utility distributed by antivirus vendor Avast has been spreading an unsavory surprise: a malware package, signed with what appears to be a legitimate certificate, that could allow affected computers to be remotely accessed or controlled. The malware, which was distributed through the update server for the Windows cleanup utility CCleaner, was apparently inserted by an attacker who compromised the software “supply chain” of Piriform, the developer Avast acquired in July. CCleaner has been downloaded more than 2 billion times worldwide, so the potential impact of the malware is huge.

Software updates are increasingly being targeted by distributors of malware, because they provide a virtually unchecked path to infect millions—or even billions—of computers. A compromised software update server for Ukraine software vendor M.E.Doc was used to distribute the NotPetya ransomware attack in July. “Watering hole” attacks, such as the ones used against Facebook, Apple, and Twitter four years ago, are often used to compromise the computers used by software developers. When successful, they can give malware authors what amounts to the keys to the software developer’s kingdom—their compilation tools and signing certificates, as well as access to their workflow for software updates.

In a blog post this morning, Cisco Talos Intelligence’s Edmund Brumaghin, Ross Gibb, Warren Mercer, Matthew Molyett, and Craig Williams reported that Talos had detected the malware during beta testing of a new exploit-detection technology. The malware was part of the signed installer for CCleaner v5.33 and included code that called back to a command-and-control (C&C) server, as well as a domain-generation algorithm (DGA) intended to find a new C&C server if the hard-coded IP address of the primary server became unreachable. Copies of the malicious installer were distributed to CCleaner users between August 15 and September 12, 2017, using a valid certificate issued to Piriform Ltd by Symantec.
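
The Talos write-up documents the actual algorithm; the TypeScript sketch below, with invented constants, just illustrates the general DGA technique: malware and operator derive the same pseudo-random domain names from the current date, so the operator can register one of them ahead of time and regain control:

// Hypothetical sketch of a date-seeded DGA; not the actual CCleaner code.
function fallbackDomains(year: number, month: number, count: number): string[] {
  // Every infected machine computes the same seed in a given month, so the
  // operator can pre-register whichever domain comes up first.
  let seed = (year * 100 + month) >>> 0;
  const domains: string[] = [];
  for (let i = 0; i < count; i++) {
    let label = "";
    for (let j = 0; j < 12; j++) {
      // Cheap deterministic PRNG step (32-bit multiply keeps JS math exact).
      seed = (Math.imul(seed, 16777619) + 2166136261) >>> 0;
      label += String.fromCharCode(97 + (seed % 26)); // 'a'..'z'
    }
    domains.push(label + ".com");
  }
  return domains;
}

// If the hard-coded C&C address stops answering, try this month's candidates:
// fallbackDomains(2017, 9, 3) -> three pseudo-random September 2017 domains.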


September 18, 2017 at 05:29PM
via Ars Technica UK http://ift.tt/2jDm8n1

Me, my data and I: Decode and the future of the personal data economy

By Glyn Moody

It’s no secret that personal data has become the key commodity of the online business world. The Internet giants – Facebook, Google, and the rest – all provide their services “free” but make money from the detailed profiles they build of our activity as we use social networks and move around the Web. Since we have no choice but to allow this if we want to use the services, most people simply accept the practice as an inevitable, if regrettable, fact of digital life.

But the consequences of doing so are serious. It means most of our activities online are tracked and stored – principally by companies, but also by governments that can draw on that data, using both front and back door access. It means that information about our supposed interests and preferences is fed back into the services to shape the content we see, and the ads that are displayed. It also means that intimate knowledge gleaned from the data can be used to manipulate us in subtle ways. But does it have to be like this? A project funded by the European Union called Decode (DEcentralised Citizen Owned Data Ecosystems) is exploring that question, in the hope that the answer is “no”:

“DECODE is about giving people ownership of their personal data so they can secure their privacy and reclaim their digital sovereignty. It will create new technologies which put them in control of how their data is used so they can decide who has access, and for what purposes. In doing so, DECODE will create a new digital economy ecosystem, enabling in particular the rise of more localised, democratic models for pooling and sharing data. These new technologies will be piloted in Amsterdam and Barcelona. A key principle of this will be the pursuit of social value over purely economic return. It will also enable governments to be more responsive to citizen needs.”

That comes from a major new report released by the Decode team, entitled “Me, my data and I: The future of the personal data economy”. As well as explaining the problems with the current model – personal data treated as something that can be owned and mined by digital extractivists like Facebook and Google – the report does something unusual: it offers an alternative vision for our digital future.

“In 2035, the majority of people now have their own personal data portals. These are in effect small servers, often located in their homes or a secure location of their choosing, which store all their personal data. This gives them control over how this data is used.”

An alternative to servers located in the home is to store all this personal data in the cloud, perhaps fragmented and scattered across multiple server farms in encrypted form for added security and resilience. But wherever and however it is held, the key element is that personal data remains under the individual’s control at all times. Today’s Internet services would then be granted access to some of that data in a controlled and precise way.

So if a service were only available to those over 18, proof of just that fact would be sent, rather than a date of birth or other unnecessary personal information – a technique known as Attribute-Based Credentials. Intelligence could be built into the personal data stores so that important information would only be released in certain circumstances – for example, health data in a medical emergency. Decode calls these “Smart Rules”; a code sketch of both ideas follows the quotation below. One interesting idea is to combine Smart Rules with distributed ledger technologies, like the blockchain that underpins Bitcoin:

“In the case of DECODE, the ledger will be made up of the permissions which users attach to their personal data as Smart Rules. By storing these rules in a public distributed ledger, the Smart Rules will be highly transparent (in terms of showing where data is and who has had access to it) as well as tamper proof.

“It’s said that distributed ledgers’ key characteristics could provide a foundational protocol for a fairer digital identity system on the web. Beyond its application for digital currencies, distributed ledgers could provide a new set of technical standards for transparency, openness and user consent, on top of which a whole new generation of services might be built.”
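
The report describes Smart Rules in prose rather than code, but the idea can be sketched: data leaves the personal store only when a rule’s predicate holds, and even then only as a minimal derived attribute, never the raw record. Everything below is a hypothetical TypeScript illustration – the names, types, and rules are assumptions, not Decode’s actual design:

// Hypothetical illustration of Decode-style Smart Rules. Deny by default;
// release only derived attributes, never the underlying records.
interface PersonalStore {
  dateOfBirth: Date;
  bloodType: string;
}

interface RequestContext {
  requester: string;            // who is asking
  purpose: string;              // declared purpose of the request
  emergencyAttested: boolean;   // e.g. a signed ambulance-service attestation
}

interface SmartRule {
  description: string;
  allows(ctx: RequestContext): boolean;
  derive(store: PersonalStore): string;   // the minimal released attribute
}

// Attribute-based credential in spirit: prove "over 18", not the birth date.
const over18: SmartRule = {
  description: "Age checks receive a boolean, never the date of birth",
  allows: (ctx) => ctx.purpose === "age-check",
  derive: (store) => {
    const ageMs = Date.now() - store.dateOfBirth.getTime();
    const years = ageMs / (365.25 * 24 * 60 * 60 * 1000);
    return `over18=${years >= 18}`;
  },
};

// Sensitive data released only under attested emergency conditions.
const emergencyHealth: SmartRule = {
  description: "Blood type released only in an attested medical emergency",
  allows: (ctx) => ctx.purpose === "emergency" && ctx.emergencyAttested,
  derive: (store) => `bloodType=${store.bloodType}`,
};

function handle(rule: SmartRule, store: PersonalStore,
                ctx: RequestContext): string | null {
  return rule.allows(ctx) ? rule.derive(store) : null; // deny by default
}

And here is a toy version of the ledger idea: an append-only, hash-linked log of permission grants, so that no past grant can be altered or silently removed without breaking the chain. This is a single-node stand-in for a real distributed ledger, using Node.js’s built-in crypto module, with all names assumed:

import { createHash } from "crypto"; // Node.js built-in

// Each entry commits to its predecessor by hash, making history tamper-evident.
interface PermissionEntry {
  user: string;       // whose data
  grantee: string;    // who may access it
  rule: string;       // which Smart Rule is being granted
  timestamp: number;
  prevHash: string;
  hash: string;
}

function entryHash(e: Omit<PermissionEntry, "hash">): string {
  return createHash("sha256")
    .update(`${e.user}|${e.grantee}|${e.rule}|${e.timestamp}|${e.prevHash}`)
    .digest("hex");
}

function grant(log: PermissionEntry[], user: string,
               grantee: string, rule: string): PermissionEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "GENESIS";
  const partial = { user, grantee, rule, timestamp: Date.now(), prevHash };
  const entry = { ...partial, hash: entryHash(partial) };
  log.push(entry);
  return entry;
}

// Anyone holding the log can check that the whole history is intact.
function verify(log: PermissionEntry[]): boolean {
  return log.every((e, i) => {
    const prevHash = i === 0 ? "GENESIS" : log[i - 1].hash;
    return e.prevHash === prevHash && e.hash === entryHash(e);
  });
}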

Putting individuals firmly in control of their own data opens up another possibility: the voluntary aggregation of personal data to create a “data commons”, managed by the community, which can be used for the public good. The idea is that a community of data donors agrees on overall rules for how the blended data can be used. Since people are able to establish with others how their data is accessed and analyzed, they are more likely to grant permission than they are today, when they are rightly suspicious of what will happen to highly personal information that is no longer under their control. The shift to personal data servers could therefore liberate important data, and lead to far wider use of sensitive medical and genomic information, say, with a corresponding increase in breakthroughs and treatments for all.

Decode suggests that there is another, rather unexpected benefit for businesses if they give up their monopoly control of personal data. The report points out that the new EU General Data Protection Regulation (GDPR) that will come into force next year includes extremely harsh financial penalties – up to 4% of global turnover – for companies that fail to protect personal data. Decode believes that it will soon become too risky for companies operating in the EU to hold huge quantities of personal data. Instead, personal data servers that grant appropriate permissions to companies needing information would allow them to operate largely as today, but without the problems that the GDPR and similar legislation will bring.

That’s one of the most important points in this new report. It means that the current tension between companies that want full control of people’s data, and the individuals who want their privacy to be respected, will disappear. Once the dangers of holding personal data on-site outweigh the benefits, the Decode team believes companies will shift across to the new approach based around accessing personal data servers, wherever they might be located.

It’s an optimistic vision, and a necessary one. In the wake of massive data losses like the recent Equifax disaster, and a growing realization that Facebook is using personal data for some very questionable business deals – for example, selling advertising to a Russian troll farm during the US election – there is growing resistance to the current model. The Decode project not only points the way to a better alternative, but aims to create and release, as open source, the software that will start to turn it into reality.


September 19, 2017 at 02:38PM
via Privacy Online News http://ift.tt/2hf3e1e

Your phone can now be turned into an ultrasound sonar tracker against you and others

By Rick Falkvinge

New research shows how a mobile phone can be turned into a covert indoor ultrasound sonar, locating people with high precision using multi-target echolocation and even discerning a rough set of activities. It does this by embedding imperceptible sonar pings in played-back music and measuring the reflections that return to the phone’s microphone. The privacy implications are staggering.

By emitting inaudible ultrasound pings as part of normal music playback, a phone can be turned into a passive sonar device, researchers from the University of Washington show in a new paper. It can track multiple individuals at an indoor precision of 8 centimeters (3 inches), and detect different types of activity by the people in its detection zone — even through barriers, all using a normal smartphone.
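
The paper’s actual signal processing is considerably more sophisticated, but the core active-sonar step can be sketched simply: cross-correlate what the microphone records against the known emitted pulse (a matched filter) and convert the delay of the echo’s correlation peak into a distance. In this TypeScript sketch, the sample rate, the number of skipped early lags, and the single-peak picking are all assumptions:

// Simplified active-sonar sketch: matched filter plus delay-to-distance.
const SAMPLE_RATE = 48_000;   // samples per second (assumed)
const SPEED_OF_SOUND = 343;   // metres per second in room-temperature air

// Sound travels to the target and back, hence the division by two.
function delayToDistance(delaySamples: number): number {
  return (delaySamples / SAMPLE_RATE) * SPEED_OF_SOUND / 2;
}

// Naive cross-correlation of the recording against the emitted pulse.
function crossCorrelate(recorded: Float32Array, pulse: Float32Array): Float32Array {
  const out = new Float32Array(recorded.length - pulse.length + 1);
  for (let lag = 0; lag < out.length; lag++) {
    let acc = 0;
    for (let i = 0; i < pulse.length; i++) acc += recorded[lag + i] * pulse[i];
    out[lag] = acc;
  }
  return out;
}

// The strongest late correlation peak is, roughly, the nearest reflector.
function estimateRangeMetres(recorded: Float32Array, pulse: Float32Array): number {
  const corr = crossCorrelate(recorded, pulse);
  // Skip the earliest lags: the direct speaker-to-microphone path dominates.
  const start = Math.min(100, corr.length - 1);
  let best = start;
  for (let lag = start + 1; lag < corr.length; lag++) {
    if (Math.abs(corr[lag]) > Math.abs(corr[best])) best = lag;
  }
  return delayToDistance(best);
}

At 48 kHz, one sample of round-trip delay corresponds to about 3.6 millimetres, which is why centimetre-level precision is plausible in principle.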

People with a background in military technology will recognize this as akin to next-generation passive covert radar systems, which don’t transmit anything themselves but detect objects in the sky from changes to the reflection patterns of ever-present civilian transmitters such as radio and TV towers. The primary advantage of passive covert radar is that it can’t be detected, as it contains only very sensitive receivers, no transmitters. This phone research appears to use the same kind of technology, except that the phone also transmits the ultrasound pings itself; it would be trivial, however, to separate the transmitter of the pings from the receiver of the reflected patterns.

“We achieve this by transforming a smartphone into an active sonar system that emits a combination of a sonar pulse and music and listens to the reflections off of humans in the environment. Our implementation, CovertBand, monitors minute changes to these reflections to track multiple people concurrently and to recognize different types of motion, leaking information about where people are in addition to what they may be doing.”

The researchers are straightforward about the privacy threat that this technology poses: “There are privacy leaks possible with today’s devices that go beyond the ability to simply record conversations in the home. For example, what if an attacker could remotely co-opt your television to track you as you move around, without you knowing? Further, what if that attacker could figure out what you were doing in addition to where you were? Could they even figure out if you were doing something with another person?”

The researchers tested five different indoor environments and over thirty moving individuals, and showed that even under ideal conditions, people typically could not detect the tracking.

“We evaluated CovertBand by running experiments in five homes in the Seattle area, showing that we can localize both single and multiple individuals through barriers. These tests show CovertBand can track walking subjects with a mean tracking error of 18 cm and subjects moving at a fixed position with an accuracy of 8 cm at up to 6 m in line-of-sight and 3 m through barriers.”

It’s conceivable that malicious apps with access to the speakers and microphone will be able to use this. It’s also conceivable that apps already are. Besides a number of smartphones, the researchers also implemented their CovertBand demonstrator on a 42-inch Sharp television set.

“Even in ideal scenarios, listeners were unlikely to detect a CovertBand attack.”

Privacy remains your own responsibility.


September 15, 2017 at 02:38PM
via Privacy Online News http://ift.tt/2wuSPUB

What I’ve learned in five years of publishing the Privacy Surgeon

By Simon Davies

It has been just over five years since the Privacy Surgeon began. Since then, the experience has been a rollercoaster. I wanted to share with readers the things I have learned from taking on this challenge.

I pledged back then: “These pages are devoted to promoting such tests of evidence and contrasting that body of knowledge against the hypocrisy, doublespeak, secrecy, unfairness, deception and betrayal that time and time again emerge globally as lightning rods to provoke deep anger”. I hope I have lived up to that challenge.

335 posts and ten million site visits. I know that doesn’t sound impressive over an entire five years, but it is sort of impressive – for me at least. You can’t get away these days with publishing a credible blog unless you meet the highest standards of research and journalism – and that takes much time and effort. Like all writers, I sometimes failed to achieve that standard, but I hope the vast majority of my work has been solid. 330,000 words. Gosh, I could have written five books with all that output. At least then I’d possess hard-copy birthday presents for my friends.

A few colleagues have asked me whether there were any spectacular moments throughout the site’s history. Oh yes! There have been quite a few.

The episode that springs immediately to mind unfurled in June 2013. Following the revelations of Edward Snowden, an old friend and former NSA contractor, Wayne Madsen, contacted me with news that the NSA’s activities in Europe were far more complex and widespread than we had been led to believe. He spoke in some detail about secret NSA arrangements with Germany and other countries.

I took this story to the Observer, one of Britain’s most influential and respected newspapers. The editors agreed that Madsen’s disclosure was critically important. The paper decided to run the story as its front page splash, and would give the Privacy Surgeon two hours’ publication leeway so we got onto the wires first.

For a blog site in its infancy, this deal was pure gold. Or, at least, that’s what I had foolishly imagined.

True to its word, the Observer led the paper with the Madsen story. Then everything went to pieces. The US liberal media went into overdrive. The left hated Wayne Madsen, and within an hour of the article’s release, it made sure its condemnation of him – and the story – went viral.

The Editor-in-Chief of the Observer/Guardian Newspaper Group was in the US at the time, trying to sell his financially distressed company to an American audience. Alan Rusbridger was barely off the plane when his phone went berserk. “This Madsen guy is a loon. He’s a conspiracy nut”. “He’s insane – always has been”.

Rusbridger called the Observer and demanded that it pulp the first edition and replace the splash. This act was unprecedented and caused the Observer to go into meltdown. Editors agreed that they had been hoodwinked by the Privacy Surgeon.

That belief was far from the truth. A week later, the respected German news magazine Der Spiegel ran almost exactly the same article. It turned out that the Guardian had already cut a deal with Spiegel for the rights. I got a private apology from the newspaper, but nothing public.

Messing around with national security is a murky business, but it has to be done. Angered by the Observer debacle, I then offered a $1,000 bounty for the capture of the DNA of any spy chief. There have been precedents for such actions, including a successful 2008 bounty I ran through Privacy International for the capture of the UK Home Secretary’s fingerprints.

There were repercussions. The following month, I was speaking at a conference in Berlin and was approached by a suave guy in a three-piece suit who made small talk before adding, “I would strongly advise you to remove that bounty. It’s in your best interest”. His parting shot was “None of us want another ID card incident” (I assume he was referring to my infamous feud with UK Prime Minister Tony Blair over my campaign against the UK ID card, and the subsequent media flurry over my imagined suicide because of the horrific persecution by Ministers).

No-one had a clue about this man’s identity. We did learn that he was educated at Cambridge – alma mater to the spies. I never did bother to remove the blog. Nor – despite threatening phone calls – did I remove the blog which showed UK Foreign Secretary William Hague in a rubber gimp suit. Haven’t these people heard of satire?

Actually, satire can work really well as a device. Some of the most popular blogs on here have been satirical. Sometimes, however, satire fails. In one blog I chronicled the many media enquiries and hate mail I received after publishing satirical articles. All satire, I therefore declared in 2013, had become believable. I mean, seriously, things have gone bad when journalists believe a piece about a diabetic Spanish grandmother destabilising trans-Atlantic geopolitics.

Bloggers take heart. Powerful institutions do read what you have to say. When I ran a piece condemning Santander Bank for dumping liability onto their customers, the corporation spent a lot of time persuading me to print their meaningless response (which, in the end, I did). British Airways, likewise, went nuts over a partly satirical piece on here. I never bothered to print their reply because, in short, it was even more banal than Santander’s. However, Microsoft’s heated exchange with me over a blog critical of its terms of service warranted a full response because it was substantive in nature.

Institutions sometimes take notice of what you write. When my friend Edward was detained and strip-searched at Canadian border control for possession of illicit and undeclared chocolates, the resulting blog here caused much controversy and heat in that fair country. And rightly so. Other times, government agencies totally ignore exposure, such as when my colleague James was denied entry to the UK. You win some, you lose some.

Many people have asked why the Privacy Surgeon doesn’t enable reader comments. In short, it’s because the task of managing a comment facility is even greater than the task of writing the blogs. There are haters out there, and idiots. People routinely slander and defame. Yes, there are many instances where commentary is helpful, but there are many more where commenters are simply out to cause hurt or disruption. There simply aren’t enough hours in the day to manage such episodes.

All bloggers know the struggle to attract readers. You work your heart out, but when you discover your global site traffic ranking you can easily become dismayed. The Privacy Surgeon hovers between two and three million in the world rankings (out of around 1.5 billion sites). When I look at the Hunton & Williams blog, it beats us hands-down, coming in at below a million. The global law firm Field Fisher does even better at 300,000. Most privacy blogs sit at around 6-12 million on the scale. There are no magic bullets to increase the ranking; to do so you’d need to spend most of your time marketing rather than writing. Still, if any blogger can attract a few thousand dedicated readers, the effort is well worthwhile.

A huge thank you to all those people who have helped make this site possible, especially the developers Jim and Pete, dear and trusted friends who have supported me in the most creative and nurturing way that anyone could have hoped for.

September 14, 2017 at 03:55PM
via The Privacy Surgeon http://ift.tt/2juAjuv

EU Prepares Guidelines to Force Google & Facebook to Police Piracy

By Andy

In the current climate, creators and distributors are forced to play a giant game of whac-a-mole to limit the unlicensed spread of their content on the Internet.

The way the law stands today in the United States, EU, and most other developed countries, copyright holders must wait for content to appear online before sending targeted takedown notices to hosts, service providers, and online platforms.

After sending several billion of these notices, rightsholders’ patience is wearing thin, so a new plan is beginning to emerge. Rather than taking down content after it appears, major entertainment industry groups would prefer companies to take proactive action. The upload filters currently under discussion in Europe are a prime example, and are already causing controversy.

Continuing the momentum in this direction, Reuters reports that the European Union will publish draft guidelines at the end of this month, urging platforms such as Google and Facebook to take a more proactive approach to illegal content of all kinds.

“Online platforms need to significantly step up their actions to address this problem,” the draft EU guidelines say.

“They need to be proactive in weeding out illegal content, put effective notice-and-action procedures in place, and establish well-functioning interfaces with third parties (such as trusted flaggers) and give a particular priority to notifications from national law enforcement authorities.”

On the copyright front, Google already operates interfaces designed to take down infringing content and, as its recent agreement with copyright holders in the UK shows, is also prepared to make infringing content harder to find. Nevertheless, it remains to be seen whether Google is prepared to give even ‘trusted’ third parties a veto over what content can appear online without retaining oversight itself.

The guidelines are reportedly non-binding, but further legislation in this area isn’t being ruled out for spring 2018 if companies fail to make significant progress.

Interestingly, however, a Commission source told Reuters that any new legislation would not “change the liability exemption for online platforms.” Maintaining these so-called ‘safe harbors’ is a priority for online giants such as Google and Facebook – anything less would almost certainly be a deal-breaker.

The guidelines, due to be published at the end of September, will also encourage online platforms to publish transparency reports. These should detail the volume of notices received and actions subsequently taken. Again, Google is way ahead of the game here, having published this kind of data for the past several years.

“The guidelines also contain safeguards against excessive removal of content, such as giving its owners a right to contest such a decision,” Reuters adds.

More will be known about the proposals in a couple of weeks but it’s quite likely they’ll spark another round of debate on whether legislation is the best route to tackle illegal content or whether voluntary agreements – which have a tendency to be rather less open – should be the way to go.


September 15, 2017 at 02:15PM
via TorrentFreak http://ift.tt/2vYHUTV

A computer tells your government you’re 91% gay. Now what?

By Rick Falkvinge

A fascinating and horrifying new AI algorithm is able to predict your sexual orientation with 91% accuracy from five photographs of your face. According to the researchers, the human brain isn’t wired to read this data from a face, but according to these results, it is there, and an AI can detect it. This raises a bigger issue: who will have access to AI in the future, and what will they use it for?

The article in The Guardian is fascinating and scary. It describes new research that can predict with 91% accuracy whether a man is homosexual, based on just five photographs of his face; for women, the accuracy is 83%. This makes the AI leaps and bounds better than its human counterparts, who got the answers 61% and 54% correct, respectively — more or less a coin toss, useless as a measure. The researchers describe how the human brain apparently isn’t wired to detect signs that are clearly present in the face of an individual, signs that are demonstrably detectable.

Normally, this would just be a curiosity, akin to “computer is able to detect subject’s eye color using camera equipment”. But this particular detection has very special, and severe, repercussions. In too many countries, all of which we consider underdeveloped, this particular eye color — this sexual orientation — happens to be illegal. If you were born this way, you’re a criminal. Yes, it’s ridiculous. They don’t care. The punishments go all the way up to the death penalty.

So what happens when a misanthropic ruler finds this AI, and decides to run it against the passport and driver license photo databases?

What happens when the bureaucracy in such a country decides you’re 91% gay, based on an unaccountable machine, regardless of what you think?

This highlights a much bigger problem with AIs than the AIs themselves, namely, what happens when despotic governments get access to superintelligence. It was discussed briefly on Twitter the other day, in a completely different context:

“Too many worry what Artificial Intelligence — as some independent entity — will do to humankind. Too few worry what people in power will do with Artificial Intelligence.”

Now, having a 91% indicator is not enough to convict somebody in a court of law of this “crime” in a justice system meeting any kind of reasonable standard. But it doesn’t have to be a reasonable standard.

If you want an idea of what could happen, well within the realm of horrifying possibility, consider the McCarthy era in the United States, when anybody remotely suspected of being a communist was shut out from society: denied jobs, denied housing, denied a social context.

What would have happened if a computer of the time, based on some similar inexplicable magic, decided that a small number of people were 91% likely to be communist?

They would not have gotten housing, they would not have gotten jobs, and they would have lost many if not all of their friends. All because some machine determined them to possibly, maybe, maybe not, probably (according to the machine builders), be in a risk group of the time.

We need to start talking about what governments are allowed to do with data like this.

Sadly, the governments that need such a discussion the most are also the governments that will allow and heed such a discussion the least.

Privacy really remains your own responsibility.



September 9, 2017 at 08:55PM
via Privacy Online News http://ift.tt/2whqJfz