Your phone can now be turned into an ultrasound sonar tracker against you and others

By Rick Falkvinge

New research shows how a mobile phone can be turned into a covert indoor ultrasound sonar, locating people indoors with high precision using multi-target echolocation, and even discerning a rough selection of activities. It does this by embedding imperceptible ultrasound pings in played-back music and measuring the reflections coming back to the phone’s microphone. The privacy implications are staggering.

By emitting inaudible ultrasound pings as part of normal music playback, a phone can be turned into a covert sonar device, researchers from the University of Washington show in a new paper. It can track multiple individuals at an indoor precision of 8 centimeters (3 inches), and detect different types of activity by the people in its detection zone — even through barriers, all using a normal smartphone.

People with a military technology background will recognize this as next-generation passive covert radar: systems which don’t transmit, but which detect objects in the sky from changes to the reflection patterns of ever-present civilian transmitters such as radio and TV towers. The primary advantage of passive covert radar is that it can’t be detected, as it contains only very sensitive receivers, no transmitters. This phone research appears to use the same kind of technology, except the phone also acts as the transmitter of ultrasound pings; however, it would be trivial to separate the transmitter of pings from the receiver of the reflected patterns.

“We achieve this by transforming a smartphone into an active sonar system that emits a combination of a sonar pulse and music and listens to the reflections off of humans in the environment. Our implementation, CovertBand, monitors minute changes to these reflections to track multiple people concurrently and to recognize different types of motion, leaking information about where people are in addition to what they may be doing.”

The researchers are straightforward about the privacy threat that this technology poses: “There are privacy leaks possible with today’s devices that go beyond the ability to simply record conversations in the home. For example, what if an attacker could remotely co-opt your television to track you as you move around, without you knowing? Further, what if that attacker could figure out what you were doing in addition to where you were? Could they even figure out if you were doing something with another person?”

The researchers tested five different indoor environments and over thirty different moving individuals, and show that even under ideal conditions, people typically could not detect the tracking.

“We evaluated CovertBand by running experiments in five homes in the Seattle area, showing that we can localize both single and multiple individuals through barriers. These tests show CovertBand can track walking subjects with a mean tracking error of 18 cm and subjects moving at a fixed position with an accuracy of 8 cm at up to 6 m in line-of-sight and 3 m through barriers.”
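That level of precision is plausible from first principles. As a rough sketch (my own back-of-the-envelope illustration; the names and numbers below are assumptions, not code or parameters from the CovertBand paper), range follows from the round-trip time of a reflected ping:

```python
# Rough sketch of the underlying physics (hypothetical illustration,
# not from the CovertBand paper). A sonar estimates range from the
# round-trip time of a ping reflected off a target.

SPEED_OF_SOUND = 343.0  # metres/second in air at roughly 20 °C

def range_from_echo_delay(delay_s: float) -> float:
    """Distance to the reflector: the ping travels out and back."""
    return SPEED_OF_SOUND * delay_s / 2

# At a common smartphone sampling rate of 48 kHz, one sample of timing
# resolution corresponds to a range step of only a few millimetres,
# which makes centimetre-scale tracking plausible in principle.
sample_period = 1 / 48_000
print(round(range_from_echo_delay(sample_period) * 1000, 2))  # → 3.57 (mm)
```

Real multi-target tracking is of course far harder (separating overlapping reflections, filtering noise, handling barriers), but the raw timing resolution of a phone’s audio hardware is clearly sufficient.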

It’s conceivable that malicious apps with access to the speakers and microphone will be able to use this. It’s also conceivable that apps already are. In addition to many smartphone devices, the researchers also implemented their CovertBand demonstrator on a 42-inch SHARP television set.

“Even in ideal scenarios, listeners were unlikely to detect a CovertBand attack.”

Privacy remains your own responsibility.

The post Your phone can now be turned into an ultrasound sonar tracker against you and others appeared first on Privacy Online News.

September 15, 2017 at 02:38PM
via Privacy Online News

What I’ve learned in five years of publishing the Privacy Surgeon

By Simon Davies

It has been just over five years since the Privacy Surgeon began. Since then, the experience has been a rollercoaster. I wanted to share with readers the things I have learned from taking on this challenge.

I pledged back then: “These pages are devoted to promoting such tests of evidence and contrasting that body of knowledge against the hypocrisy, doublespeak, secrecy, unfairness, deception and betrayal that time and time again emerge globally as lightning rods to provoke deep anger”. I hope I have lived up to that challenge.

335 blogs and ten million site visits. I know that doesn’t sound impressive over an entire five years, but it is sort of impressive – for me at least. You can’t get away these days with publishing a credible blog unless you meet the highest standards of research and journalism – and that takes much time and effort. Like all writers, I sometimes failed at achieving that standard but I hope the vast majority of my work has been solid. 330,000 words. Gosh, I could have written five books with all that output. At least then I’d possess hard-copy birthday presents for my friends.

A few colleagues have asked me whether there were any spectacular moments throughout the site’s history. Oh yes! There have been quite a few.

The episode that springs immediately to mind unfurled in June 2013. Following the revelations of Edward Snowden, an old friend and former NSA contractor, Wayne Madsen, contacted me with news that the NSA’s activities in Europe were far more complex and widespread than we had been led to believe. He spoke in some detail about secret NSA arrangements with Germany and other countries.

I took this story to the Observer, one of Britain’s most influential and respected newspapers. The editors agreed that Madsen’s disclosure was critically important. The paper decided to run the story as its front page splash, and would give the Privacy Surgeon two hours’ publication leeway so we got onto the wires first.

For a blog site in its infancy, this deal was pure gold. Or, at least, that’s what I had foolishly imagined.

True to its word, the Observer led the paper with the Madsen story. Then everything went to pieces. The US liberal media went into overdrive. The left hated Wayne Madsen, and within an hour of the article’s release, it made sure its condemnation of him – and the story – went viral.


The Editor-in-Chief of the Observer/Guardian Newspaper Group was in the US at the time, trying to sell his financially distraught company to an American audience. Alan Rusbridger was fresh off the plane when his phone went berserk. “This Madsen guy is a loon. He’s a conspiracy nut”. “He’s insane – always has been”.

Rusbridger called the Observer and demanded that it pulp the first edition and replace the splash. This act was unprecedented and caused the Observer to go into meltdown. Editors agreed that they had been hoodwinked by the Privacy Surgeon.

This fear was far from the truth. A week later, the respected German paper Der Spiegel ran almost exactly the same article. It turned out that the Guardian had already cut a deal with Spiegel for the rights. I got a private apology from the newspaper, but nothing public.

Messing around with national security is a murky business, but it has to be done. Angered by the Observer debacle, I then offered a $1,000 bounty for the capture of the DNA of any spy chief. There have been precedents for such actions, including a successful 2008 bounty I ran through Privacy International for the capture of the UK Home Secretary’s fingerprints.

There were repercussions. The following month, I was speaking at a conference in Berlin and was approached by a suave guy in a three-piece suit who made small talk before adding “I would strongly advise you to remove that bounty. It’s in your best interest”. His parting shot was “None of us want another ID card incident” (I assume he was referring to my infamous feud with UK Prime Minister Tony Blair over my campaign against the UK ID card and the subsequent media flurry over my imagined suicide because of the horrific persecution by Ministers).

No-one had a clue about this man’s identity. We did learn that he was educated at Cambridge – alma mater to the spies. I never did bother to remove the blog. Nor – despite threatening phone calls – did I remove the blog which showed UK Foreign Secretary William Hague in a rubber gimp suit. Haven’t these people heard of satire?

Actually, satire can work really well as a device. Some of the most popular blogs on here have been satirical. Sometimes, however, satire fails. In this blog I chronicled the many media enquiries and hate mail I received after publishing satirical articles. I therefore declared that in 2013, all satire became believable. I mean, seriously, things have gone bad when journalists believe a piece about a diabetic Spanish grandmother destabilising trans-Atlantic geopolitics.

Bloggers take heart. Powerful institutions do read what you have to say. When I ran a piece condemning Santander Bank for dumping liability onto their customers, the corporation spent a lot of time persuading me to print their meaningless response (which, in the end, I did). British Airways, likewise, went nuts over a partly satirical piece on here. I never bothered to print their reply because, in short, it was even more banal than Santander’s. However, Microsoft’s heated exchange with me over a blog critical of its terms of service warranted a full response because it was substantive in nature.


Institutions sometimes take notice of what you write. When my friend Edward was detained and strip searched at Canadian border control for possession of illicit and undeclared chocolates, the resulting blog here caused much controversy and heat in that fair country. And rightly so. Other times, government agencies totally ignore exposure, such as when my colleague James was denied entry to the UK. You win some, you lose some.

Many people have asked why the Privacy Surgeon doesn’t enable reader comments. In short, it’s because the task of managing a comment facility is even greater than the task of writing the blogs. There are haters out there, and idiots. People routinely slander and defame. Yes, there are many instances where commentary is helpful, but there are many more where commentators are simply out to cause hurt or disruption. There simply aren’t enough hours in the day to manage such episodes.

All bloggers know the struggle to attract readers. You work your heart out, but when you discover your global site traffic ranking you can easily become dismayed. The Privacy Surgeon hovers between the top two and three million sites in the world (out of around 1.5 billion sites). When I look at the Hunton & Williams blog, it beats us hands-down, coming in at below a million. The global law firm Field Fisher does even better at 300,000. Most privacy blogs sit at around 6-12 million on the scale. There are no magic bullets to increase the ranking; to do so you’d need to spend most of your time marketing rather than writing. Still, if any blogger can achieve a few thousand dedicated readers, the effort is well worthwhile.

A huge thank you to all those people who have helped make this site possible, especially the developers Jim and Pete, dear and trusted friends who have supported me in the most creative and nurturing way that anyone could have hoped for.

September 14, 2017 at 03:55PM
via The Privacy Surgeon

EU Prepares Guidelines to Force Google & Facebook to Police Piracy

By Andy

In the current climate, creators and distributors are forced to play a giant game of whac-a-mole to limit the unlicensed spread of their content on the Internet.

The way the law stands today in the United States, EU, and most other developed countries, copyright holders must wait for content to appear online before sending targeted takedown notices to hosts, service providers, and online platforms.

After sending several billion of these notices, patience is wearing thin, so a new plan is beginning to emerge. Rather than taking down content after it appears, major entertainment industry groups would prefer companies to take proactive action. The upload filters currently under discussion in Europe are a prime example but are already causing controversy.

Continuing the momentum in this direction, Reuters reports that the European Union will publish draft guidelines at the end of this month, urging platforms such as Google and Facebook to take a more proactive approach to illegal content of all kinds.

“Online platforms need to significantly step up their actions to address this problem,” the draft EU guidelines say.

“They need to be proactive in weeding out illegal content, put effective notice-and-action procedures in place, and establish well-functioning interfaces with third parties (such as trusted flaggers) and give a particular priority to notifications from national law enforcement authorities.”

On the copyright front, Google already operates interfaces designed to take down infringing content and, as the recent agreement with copyright holders in the UK shows, is also prepared to make infringing content harder to find. Nevertheless, it remains to be seen whether Google is prepared to give even ‘trusted’ third parties a veto over what content can appear online, without having oversight itself.

The guidelines are reportedly non-binding but further legislation in this area isn’t being ruled out for Spring 2018, if companies fail to make significant progress.

Interestingly, however, a Commission source told Reuters that any new legislation would not “change the liability exemption for online platforms.” Maintaining these so-called ‘safe harbors’ is a priority for online giants such as Google and Facebook – anything less would almost certainly be a deal-breaker.

The guidelines, due to be published at the end of September, will also encourage online platforms to publish transparency reports. These should detail the volume of notices received and actions subsequently taken. Again, Google is way ahead of the game here, having published this kind of data for the past several years.

“The guidelines also contain safeguards against excessive removal of content, such as giving its owners a right to contest such a decision,” Reuters adds.

More will be known about the proposals in a couple of weeks but it’s quite likely they’ll spark another round of debate on whether legislation is the best route to tackle illegal content or whether voluntary agreements – which have a tendency to be rather less open – should be the way to go.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

September 15, 2017 at 02:15PM
via TorrentFreak

A computer tells your government you’re 91% gay. Now what?

By Rick Falkvinge

A fascinating and horrifying new AI algorithm is able to predict your sexual orientation with 91% accuracy from five photographs of your face. According to the researchers, the human brain isn’t wired to read this data from a face, but according to these results, it is there, and an AI can detect it. This raises a bigger issue: who will have access to AI in the future, and what will they use it for?

The article in The Guardian is fascinating and scary. It describes new research that is able to predict with 91% accuracy whether a man is homosexual, based on just five photographs of his face. Similarly, it has an 83% accuracy in predicting homosexuality in women. This makes the AI leaps and bounds better than its human counterparts, who got the responses 61% and 54% correct, respectively — more or less a coin toss, useless as a measure. The researchers describe how the human brain apparently isn’t wired to detect signs that are clearly present in the face of an individual, signs that are demonstrably detectable.

Normally, this would just be a curiosity, akin to “computer is able to detect subject’s eye color using camera equipment”. But this particular detection has very special, and severe, repercussions. In too many countries, all of which we consider underdeveloped, this particular eye color — this sexual orientation — happens to be illegal. If you were born this way, you’re criminal. Yes, it’s ridiculous. They don’t care. The punishments go all the way up to the death penalty.

So what happens when a misanthropic ruler finds this AI, and decides to run it against the passport and driver license photo databases?

What happens when the bureaucracy in such a country decides you’re 91% gay, based on an unaccountable machine, regardless of what you think?

This highlights a much bigger problem with AIs than the AIs themselves, namely, what happens when despotic governments get access to superintelligence. It was discussed briefly on Twitter the other day, in a completely different context:

“Too many worry what Artificial Intelligence — as some independent entity — will do to humankind. Too few worry what people in power will do with Artificial Intelligence.”

Now, having a 91% indicator is not enough to convict somebody in a court of law of this “crime” in a justice system meeting any kind of reasonable standard. But it doesn’t have to be a reasonable standard.
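Bayes’ rule makes concrete just how treacherous a “91% indicator” is at scale. The sketch below is my own back-of-the-envelope arithmetic, not the researchers’: the 5% prevalence figure is an assumption, and treating the reported 91% as both sensitivity and specificity is a simplification the paper does not state.

```python
# Hypothetical base-rate illustration (not from the research): a highly
# "accurate" classifier still flags mostly false positives when the
# trait it detects is rare in the population being scanned.

def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """Bayes' rule: probability a flagged person actually has the trait."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Assume ~5% prevalence and treat the reported 91% as both sensitivity
# and specificity: roughly two out of three flagged people are flagged
# wrongly.
print(round(posterior(0.05, 0.91, 0.91), 2))  # → 0.35
```

In other words, run against an entire passport database, such a system would flag far more innocent people than it would correctly identify, which makes its use by an unaccountable bureaucracy even more chilling.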

If you want an idea of what could happen, well within the realm of horrifying possibility, consider the McCarthyism era in the United States, when anybody remotely suspected of being a communist was shut out from society: denied jobs, denied housing, denied a social context.

What would have happened if a computer of the time, based on some similar inexplicable magic, decided that a small number of people were 91% likely to be communist?

They would not have gotten housing, they would not have gotten jobs, and they would have lost many if not all of their friends. All because some machine determined them to possibly, maybe, probably (according to the machine’s builders) be in a risk group of the time.

We need to start talking about what governments are allowed to do with data like this.

Sadly, the governments which need such a discussion the most are also the governments which will allow and heed such a discussion the least.

Privacy really remains your own responsibility.

The post A computer tells your government you’re 91% gay. Now what? appeared first on Privacy Online News.

September 9, 2017 at 08:55PM
via Privacy Online News

India’s Supreme Court Upholds Right to Privacy as a Fundamental Right—and It’s About Time

By jmalcolm

Last week’s unanimous judgment by the Supreme Court of India (SCI) in Justice K.S. Puttaswamy (Retd) vs Union of India is a resounding victory for privacy. The ruling is the outcome of a petition challenging the constitutional validity of the Indian biometric identity scheme Aadhaar. The judgment’s ringing endorsement of the right to privacy as a fundamental right marks a watershed moment in the constitutional history of India. The one-page order signed by all nine judges declares:

The right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.

The right to privacy in India has developed through a series of decisions over the past 60 years. Over the years, inconsistency from two early judgments created a divergence of opinion on whether the right to privacy is a fundamental right. Last week’s judgment reconciles those different interpretations to unequivocally declare that it is. Moreover, constitutional provisions must be read and interpreted in a manner which would enhance their conformity with international human rights instruments ratified by India. The judgment also concludes that privacy is a necessary condition for the meaningful exercise of other guaranteed freedoms.

The judgment, in which the judges state the reasons behind the one-page order, spans 547 pages and includes opinions from six judges, creating a legal framework for privacy protections in India. The opinions cover a wide range of issues in clarifying that privacy is a fundamental inalienable right, intrinsic to human dignity and liberty.

The decision is especially timely given the rapid roll-out of Aadhaar. In fact, the privacy ruling arose from a pending challenge to India’s biometric identity scheme. We have previously covered the privacy and surveillance risks associated with that scheme. Ambiguity on the nature and scope of privacy as a right in India allowed the government to collect and compile both demographic and biometric data of residents. The original justification for introducing Aadhaar was to ensure government benefits reached the intended recipients. Following a rapid roll-out and expansion, it is the largest biometric database in the world, with over 1.25 billion Indians registered. The government’s push for Aadhaar has led to its wide acceptance as proof of identity, and as an instrument for restructuring and facilitating government services.

The Two Cases That Cast Doubt on the Right to Privacy

In 2012, Justice K.S. Puttaswamy (Retired) filed a petition in the Supreme Court challenging the constitutionality of Aadhaar on the grounds that it violates the right to privacy. During the hearings, the Central government opposed the classification of privacy as a fundamental right. The government’s opposition to the right relied on two early decisions—MP Sharma vs Satish Chandra in 1954, and Kharak Singh vs State of Uttar Pradesh in 1962—which had held that privacy was not a fundamental right.

In M.P. Sharma, the bench held that the drafters of the Constitution did not intend to subject the power of search and seizure to a fundamental right of privacy. They argued that the Indian Constitution does not include any language similar to the Fourth Amendment of the US Constitution, and therefore, questioned the existence of a protected right to privacy. The Supreme Court made clear that M.P. Sharma did not decide other questions, such as “whether a constitutional right to privacy is protected by other provisions contained in the fundamental rights including among them, the right to life and personal liberty under Article 21.”

In Kharak Singh, the decision invalidated a Police Regulation that provided for nightly domiciliary visits, calling them an “unauthorized intrusion into a person’s home and a violation of ordered liberty.” However, it also upheld other clauses of the Regulation on the ground that the right of privacy was not guaranteed under the Constitution, and hence Article 21 of the Indian Constitution (the right to life and personal liberty) had no application. Justice Subbarao’s dissenting opinion clarified that, although the right to privacy was not expressly recognized as a fundamental right, it was an essential ingredient of personal liberty under Article 21.

Over the next 40 years, the interpretation and scope of privacy as a right expanded, and was accepted as being constitutional in subsequent judgments. During the hearings of the Aadhaar challenge, the Attorney-General (AG) representing the Union of India questioned the foundations of the right to privacy. The AG argued that the Constitution’s framers never intended to incorporate a right to privacy, and therefore, to read such a right as intrinsic to the right to life and personal liberty under Article 21, or to the rights to various freedoms (such as the freedom of expression) guaranteed under Article 19, would amount to rewriting the Constitution. The government also pleaded that privacy was “too amorphous” for a precise definition and an elitist concept which should not be elevated to that of a fundamental right.

The AG based his claims on the M.P. Sharma and Kharak Singh judgments, arguing that since a larger bench had found privacy was not a fundamental right, subsequent smaller benches upholding the right were not applicable. Sensing the need to reconcile the divergence of opinions on privacy, the Court referred this technical clarification on the constitutionality of the right to a larger bench. The bench would determine whether the reasoning applied in M.P. Sharma and Kharak Singh was correct and still relevant in the present day. The bench was set up not to look into the constitutional validity of Aadhaar, but to consider a much larger question: whether the right to privacy is a fundamental right that can be traced to the rights to life and personal liberty.

Aadhaar in jeopardy? Not Quite Yet

Given the government’s aggressive defense of Aadhaar, many human rights defenders feared the worst. The steady expansion of the scheme and the delay over the nine-judge bench being formed allowed Aadhaar to become an insidious part of Indian citizens’ lives. Indeed, in many ways the delay has led to Aadhaar being linked to all manner of essential and nonessential services. In last week’s 547-page judgment, the Court is clear about the fundamental right to privacy and has overruled these two past judgments insofar as their observations on privacy were concerned. The constitutional framework for privacy clarified last week by the Court will breathe life into the Aadhaar hearings.

While it awaited clarification on the right to privacy, the bench hearing the constitutional challenge to Aadhaar passed an interim order restricting compulsory linking of Aadhaar for benefits delivery. The order ends the legal gridlock in the hearings on the validity of the scheme. The identification database that Aadhaar builds will not be easy to reconcile in the framework for privacy drawn up in the judgments. Legal experts are of the opinion that, following the judgment, “it is amply clear that Aadhaar shall have to meet the challenge of privacy as a fundamental right.”

The Aadhaar hearings, which were cut short, are expected to resume under a smaller three- or five-judge bench later this month. Outside of the pending Aadhaar challenge, the ruling can also form the basis of new legal challenges to the architecture and implementation of Aadhaar. For example, with growing evidence that state governments are already using Aadhaar to build databases to profile citizens, the security of data and limitations on data convergence and profiling may be areas for future privacy-related challenges to Aadhaar.

Implications for Future Case and Statute Law

The lead judgment calls for the government to create a data protection regime to protect the privacy of the individual. It recommends a robust regime which balances individual interests and legitimate concerns of the state. Justice Chandrachud notes, “Formulation of a regime for data protection is a complex exercise that needs to be undertaken by the state after a careful balancing of requirements of privacy coupled with other values which the protection of data subserves together with the legitimate concerns of the state.” For example, the court observes, “government could mine data to ensure resources reached intended beneficiaries.” However, the bench restrains itself from providing guidance on the issues, confining its opinion to the clarification of the constitutionality of the right to privacy.

The judgment will also have ramifications for a number of contemporary issues pending before the Supreme Court. In particular, two proceedings—on Aadhaar and on WhatsApp-Facebook data sharing—will be test grounds for the application and contours of the right to privacy in India. For now, what is certain is that the right to privacy has been unequivocally articulated by the highest Court. There is much reason to celebrate this long-due victory for privacy rights in India. But it is only the first step, as the real test of the strength of the right will lie in how it is understood and applied in subsequent challenges.

August 28, 2017 at 11:35PM
via Deeplinks

Game of Thrones meets the world of #privacy. See who our poll reveals as privacy’s GoT nasties and heroes.

By Simon Davies

Game of Thrones is filled with intrigue, deception, redemption, conflict, heroism and deal-cutting. That sounds scarily like the world of privacy. So – we wondered – who are characters in the privacy world that most resemble those in Game of Thrones?

The Privacy Surgeon recently published a poll to ask this very question. To discover who won the role of Tyrion Lannister, the evil Ramsay Bolton and others, check out the list below. The winners may surprise you.

Varys > Jules Polonetsky

If Varys was around today, he would probably have as many LinkedIn connections as Jules. And that’s a lot. The co-leader of the Future of Privacy Forum has an unprecedented web of Little Birds across the privacy world and beyond, making him a valuable consort to industry.

The Night’s King > Eric Schmidt

Both characters are seemingly unstoppable forces, the only difference being that Schmidt’s army can probably walk on water. Or at least, it can monetise the water and change its conditions. Google is the White Walkers of privacy, devastating the fortresses that have been built to protect information rights. Only the Dragon Glass of EU law and the FTC will stop them.

Arya Stark > Gus Hosein

You don’t mess with either Arya or Gus. Both give the appearance of being innocent and gentle, but wrath is certain upon any wrong-doer. It’s not clear whether Gus, who runs the leading campaign group Privacy International, has a “list”, but if he does, you wouldn’t want to be on it.

Tyrion Lannister > Marc Rotenberg

Intellectually honest, fiercely strategic and deeply intuitive, Tyrion and EPIC’s president are a force to be reckoned with. Always counselling diplomacy and due process, Marc could well be the most influential figure in at least four of the seven privacy kingdoms. Sure, he’s three feet taller than Tyrion, but we can’t hold that against him.

Samwell Tarly > Michael Froomkin

If you ever wanted to discover the fine details of strategy or magic, you’d go to Sam. If you want to discover the intricacies of privacy, you would go to Michael. This University of Miami professor has a reputation for laying bare the dynamics of the privacy world.

The High Sparrow > Joe Cannataci

Both the High Sparrow and the UN Special Rapporteur on Privacy received a mandate from the gods to do good on earth. In the end, both failed to make the grade, though Cannataci is likely to exit in a less spectacular way than the High Sparrow. Or will he?

Jaime Lannister > Brad Smith

Both characters have a past, and both have strived to gain some redemption from it. Smith, Microsoft’s President, now presides over a corporation that was once – root and branch – the Evil Empire. Now, battling the US DoJ over customer privacy, he has gained a horde of new recruits. Still, like Lannister, there are many who despise his business model and lie in wait to overthrow Smith’s armies. He needs more reinforcements and more privacy strategy.

Petyr Baelish > Peter Thiel

Let’s face it, Lord Baelish – Littlefinger – is an odious sort of character. He’s not as powerful as other figures, but he gets his sticky tentacles everywhere and poisons whatever he can. This is like what the policing technology group Palantir does to privacy. Look carefully and you’ll see Baelish’s smirk mirrored on Thiel’s face.

Daenerys Targaryen > Isabelle Falque-Pierrotin

World domination: that’s the phrase that springs to mind with these two. Daenerys used dragons; Isabelle uses her political nous. The president of France’s national watchdog CNIL soon became Chair of the Article 29 group of EU data protection regulators. Before long she will doubtless sit on the Iron Throne of the seven privacy kingdoms. Or, at least, three of them.

Hodor > Phil Booth

Physically, Hodor and Phil are both gentle and loyal giants, but (voice skills aside), the similarities don’t end there. Phil Booth of the UK’s MedConfidential NGO (and formerly head of the anti-ID group NO2ID) is a Hodor of Britain’s privacy world, always there to hold the door against invading forces.

Ramsay Bolton > Mark Zuckerberg

This match comes as no surprise. When it comes to torturing and subjugating the world of privacy, nothing does it more effectively than Facebook. Well, the Night’s King might just beat it, but that role was given to Google. Anyway, if you can imagine privacy being tied naked to a wall, you can just see Zuckerberg doing the business equivalent of flaying and dismembering it. Does Mark have Ramsay’s eyes, or is that just our imagination?

Davos Seaworth > Joe McNamee

Ah yes, sometimes there is almost total love and respect for a Game of Thrones character (well, we do have Bran, but there’s no-one in the privacy world with the gift of the Three-Eyed Raven). But Davos is a pillar of rights and dignity, and like him, Joe McNamee’s European Digital Rights (EDRi) has become the most effective and trusted rights organisation in Europe. He’s a universally respected guy, but do not underestimate him in a clash!

August 29, 2017 at 12:07PM
via The Privacy Surgeon

Student Privacy Tips for Students

By gennie

Students: As you get ready to go back to school, add “review your student privacy rights” to your back-to-school to-do list, right next to ordering books and buying supplies. Exciting new technology in the classroom can also mean privacy violations, including the chance that your personal devices and online accounts may be demanded for searches by school personnel.

Our student privacy report offers recommendations for several stakeholder groups. In this post, we’ll focus specifically on students. Given that the integration of technology in education affects their data personally, it’s vital that students are especially attentive to what’s being integrated into their curriculum. Below, we provide a few recommendations for students to act to preserve their personal data privacy:

  • Determine if there are privacy settings you can control directly in the device or application.
  • Try to ascertain the privacy practices of the ed tech providers your school uses.
  • Avoid sharing sensitive personal information (which could include, for example, search terms and browser history) if it will be transmitted back to the provider.
  • If you’re concerned by the usage of a certain service and find it intrusive, talk to your parents and explain why you find it concerning.
  • Ask to opt out or use an alternative technology when you do not feel comfortable with the policies of certain vendors.
  • Share your privacy concerns with school administrators. It may work best to gather a few like-minded students and have a joint meeting where everyone shares their concerns and asks the school administrator(s) for further guidance.

Want to learn more? Read our report Spying on Students: School-Issued Devices and Student Privacy for more recommendations, analysis of student privacy law, and case studies from across the country.

August 25, 2017 at 09:16PM
via Deeplinks