By Dr. Neal Krawetz
Many of the people I work with are former military. They often tell stories that pick on other branches of the armed forces. (It really is playful banter, and very entertaining.) One time we were talking about securing a site and there was a little confusion in the discussion. As one of the ex-military people pointed out, they all interpret “secure” differently. He explained it this way:
If you tell the Marines to secure a building, they will break down the door and move systematically room by room, floor by floor, eliminating any threats. When they are done, the building is “secured”.
If you tell the Army to secure the building, they will put a fence with razor wire around it and machine-gun nests in the corners. Nobody will get close to the building; it is “secured”.
If you tell the Air Force to secure the building, they will put a man with a clipboard in front of the door. Nobody goes in or out without signing the clipboard. It is “secured”.
And if you tell the Navy to secure the building, they will renew the lease for two more years.
I had to add: if you tell a software company to secure the building, they will hang a sign with the word “Secure” on it.
You are not alone
The operating system called Ubuntu is aptly named. This African word roughly translates as “human-ness”. The interpretation is effectively “I am what I am because of who we all are.”
As names go, this is a good description for an open source operating system. The operating system didn’t create the kernel; other people created it. The operating system did not create the drivers or shell; other people created those. The operating system didn’t even create the compilers, user interface, or most of the readily available apps; other people provided those. Instead, Canonical, Debian, Red Hat, and other Linux distributors pulled together everything that other people provided into a single system. Whether it is Ubuntu, Debian, Red Hat, or some other Linux distribution, it is what it is because of everyone involved; it is ubuntu.
This same concept applies to computer security, but few people realize it. The Ashley-Madison (AM) compromise and disclosure was big. But lots of people think that they were not affected by it. Most people are not on the list of AM customers. In fact, nobody at your company may be listed as a client, and you may not personally know anyone on the list. Because of this, companies think that the AM compromise does not impact them. However, it impacts everyone. Consider this:
- Passwords. The AM database contains over 20 million real accounts (and millions of fake accounts). That means that there are at least 20 million passwords. People are already cracking the passwords. These will eventually be incorporated into lists of common passwords for brute-force attack systems. The chances are good that someone at your company uses a similar password. The compromised AM password list may eventually be used to compromise the security at your company.
- Blackmail. The AM database dump contains enough information to blackmail people on the client list. Even if a blackmail victim does not work at your company, they may still impact your company. Perhaps they work for a hardware vendor and can be leveraged to insert malware into hardware that you buy. Or they could work for a network provider with access to your network traffic. And the attacks are not limited to blackmail: with a stolen identity, an attacker can distract an administrator long enough to successfully attack your company.
- Social Engineering. AM had millions of fake accounts, and there are reports of a successful AI system that lured millions of users into conversations. Since this technique is now public and known to be successful, it is only a matter of time before someone uses it against your users. Rather than asking for a date, the bots might ask for remote access, ask someone to run a program containing malware, or simply distract a user from the actual attack.
- Security by Obscurity. We still don’t know how the attackers managed to compromise AM. Clearly someone knows (even if it is only the attackers). Perhaps someone can use a variant of the same attack approach to compromise your company. Security by obscurity only works if nobody knows the secret; clearly someone knows the secret.
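The password risk in the first bullet can be made concrete with a short sketch. Everything here is hypothetical (the account names, salts, hashing scheme, and leaked words are invented for illustration); the point is that once a plaintext list leaks, testing those candidates against stolen password hashes is cheap:

```python
import hashlib

# Hypothetical leaked wordlist. In practice this would be millions of
# entries recovered from dumps like the AM database.
leaked_passwords = ["letmein", "password1", "hunter2", "am4ever"]

# Hypothetical local accounts, stored as salted SHA-256 hashes.
def hash_pw(password, salt):
    return hashlib.sha256((salt + password).encode()).hexdigest()

accounts = {
    "alice": ("s1", hash_pw("correct horse battery", "s1")),
    "bob":   ("s2", hash_pw("hunter2", "s2")),
}

# Audit: flag any account whose password appears in the leaked list.
def audit(accounts, leaked):
    flagged = []
    for user, (salt, digest) in accounts.items():
        for candidate in leaked:
            if hash_pw(candidate, salt) == digest:
                flagged.append(user)
                break
    return flagged

print(audit(accounts, leaked_passwords))  # flags 'bob'
```

The same loop an auditor runs defensively is the loop an attacker runs offensively, which is why every large password dump degrades everyone’s security.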
And that is just the Ashley-Madison compromise. There are also Home Depot, Sony, Neiman Marcus, Target, Anthem, OPM, and millions of other attacks that never get reported by the media. Even if you were not directly impacted by a compromise, every compromise has an impact on you.
There are many ways to evaluate risks. With any single compromise, there may be a tiny fraction of a risk toward you or your company. That computer virus on your friend’s aunt’s computer may have virtually zero chance of impacting your own security. But the thing to remember is that these tiny fractions add up. In 2013, an estimated 32% of all computers were infected with some kind of malware at some point, and 5% were infected at any given time (about 50 million computers). While the risk posed by a single infected system is very small, together they add up to a significant risk. Remember: we are what we are because of all of us.
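The “tiny fractions add up” argument is just the complement rule from probability. A sketch, where the per-machine probability is an invented illustrative number (not a measured one):

```python
# If each infected machine independently has probability p of causing
# an incident for you, the chance that at least one of n machines does
# is 1 - (1 - p)^n.
def combined_risk(p, n):
    return 1 - (1 - p) ** n

p_single = 1e-9          # hypothetical one-in-a-billion risk per machine
n_machines = 50_000_000  # the roughly 50 million infected machines cited above

# Individually negligible, collectively close to a 5% chance.
print(f"{combined_risk(p_single, n_machines):.3f}")
```

Even a one-in-a-billion per-machine risk, multiplied across tens of millions of infected machines, stops being negligible.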
Because every vulnerability makes all of us less secure, it is in everyone’s best interest to have a way for users and companies to communicate. This includes talking the same language about security risks and being open to vulnerability reports and responsible disclosure.
While this seems like a good idea, it is ignored in practice. What we, as an industry, need are:
- Clear guidelines for defining responsible disclosure,
- Clearly defined steps in the event that a company ignores a report,
- Acceptable ways to acknowledge a report, and
- Acceptable methods to follow up and check the status of a report.
Moreover, these guidelines must be written in plain English (not legalese, since most software engineers are not attorneys) and made freely available. The Electronic Frontier Foundation (EFF) has taken steps to address many of these issues with their “Coder’s Rights Project”. However, the EFF’s guidelines have not been widely accepted by industry.
Other options include standards like ISO 27035 (Security Incident Management), ISO 29147 (Vulnerability Disclosure), and ISO 30111 (Vulnerability Handling Processes). Unfortunately, the ISO standards are not free; there is a fee to view each one. Their content is also highly technical. Both the fees and the dense technical content deter widespread adoption.
Instead, we have companies defining ad hoc methods for vulnerability reporting. Some companies have very easy-to-use reporting systems and offer bounties. Others just have a basic web form. Some companies want you to call or email them, while others want you to post the problem to a mailing list. The worst companies are the ones that either threaten legal action for reporting a vulnerability, are deceptive when receiving reports, or have no public method defined for receiving security issues.
For example, earlier this month, David Kriesel gave a presentation about Xerox copiers. He found that the copiers would silently replace some characters in scanned documents with other characters. (The core problem is a software bug in Xerox’s JBIG2 implementation.)
While Kriesel’s presentation covers a huge security risk, it also includes a thorough timeline of his attempts at responsible disclosure. First Xerox ignored him. Then they said it wasn’t an issue. Then the company went slimy: they kept Kriesel on the phone while releasing a press statement that minimized the risk and effectively blamed Kriesel for not reading the 300-page manual. Eventually Xerox acknowledged the problem.
Security Town Hall
Last February, the Department of Commerce (DoC) issued a formal request for comments (RFC) titled “Stakeholder Engagement on Cybersecurity in the Digital Ecosystem”. This was followed by the public release of all comments that they received. (My comments are under “Hacker Factor Solutions”. I also summarized the various replies that the DoC received.) At the time, they also announced that they would be holding a public town hall meeting.
Well, that time has arrived. The latest announcement is that the town hall meeting will be held at the end of the month: September 29, 2015 at Booth Auditorium at the University of California, Berkeley, School of Law, Boalt Hall, from 9am-3pm PDT. For people who cannot attend in person, there will be dial-in information. Be sure to pre-register. (Unless I have trouble finding parking, I’m expecting to be there in person.)
As the DoC’s National Telecommunications and Information Administration (NTIA) describes it:
The goal of this process will be to develop a broad, shared understanding of the overlapping interests between security researchers and the vendors and owners of products discovered to be vulnerable, and to establish a consensus about voluntary principles to promote better collaboration. The question of how vulnerabilities can and should be disclosed will be a critical part of the discussion, as will how vendors receive and respond to this information. However, disclosure is only one aspect of successful collaboration.
Keep in mind, NTIA has no ability to pass laws or regulations. Their purpose is to advise the President on telecommunications and information policy issues. Their intention is to act as a moderator and to get siloed corporations and the community to work together. If we, as an industry, can recognize the common need to communicate about vulnerabilities, set acceptable standards for responsible disclosure, and develop reasonable expectations for resolving issues, then we all benefit.
September 4, 2015 at 10:31PM
via The Hacker Factor Blog http://ift.tt/1KvOKFB