by Roger R. Schell

Not only are current cybersecurity “best practices” of limited effectiveness, but they also enable and encourage activities inimical to privacy. Their root paradigm is a flawed, reactive one aptly described as “penetrate and patch”. Vigorous promotion encourages reliance on these flimsy best practices as a primary defense for private information. Furthermore, this paradigm is increasingly used to justify needlessly intrusive monitoring and surveillance of private information. Even worse in the long term, this misplaced reliance stifles the introduction of proven, mature technology that can dramatically reduce the cyber risks to privacy.

The threat of software subversion is a dire risk to privacy
Today much of the private information in the world is stored on a computer somewhere. With Internet connectivity nearly ubiquitous, it is the exception – rather than the rule – for such computers to be physically and electrically isolated, i.e., separated by an “air gap”. So, protection for privacy is no better than the cybersecurity best-practice defenses employed, and their evident weakness attracts cybercriminals. Billions of dollars of damage occur each year, including identity theft with massive exposure of personal data. Clearly, weak cybersecurity defenses create a serious risk to privacy.

Juan Caballero’s article in this issue notes that “At the core of most cybercrime operations is the attacker's ability to install malware on Internet-connected computers without the owner's informed consent.” U.S. Navy research demonstrated that an artifice of just six lines of code can lay bare control of a commercial operating system. The Stuxnet, Duqu and Flame software subversions have recently been detailed, and a senior researcher wrote, “Put simply, attacks like these work.” I made the point myself in a 1979 article, “Computer Security: the Achilles' heel of the electronic Air Force?”, where I characterized subversion as the technique of choice for professional attackers.

Best practices are not well aligned with the threat
The general response seems primarily to be a concerted push for the use of best practices, with a heavy emphasis on monitoring techniques such as antivirus products and intrusion detection. For example, several Silicon Valley luminaries recently presented a program with the explicit goal “To promote the use of best practices for providing security assurance”. In the litigious U.S. there have even been legislative proposals to reward those who use best practices with “immunity” from lawsuits.

Yet this fails to align with the software subversion threat. A major antivirus vendor recently said, “The truth is, consumer-grade antivirus products can’t protect against targeted malware.” A senior FBI official recently concluded that the status quo is “unsustainable in that you never get ahead, never become secure, never have a reasonable expectation of privacy or security”. Similarly, an IBM keynote presenter said, “As new threats arise, we put new products in place. This is an arms race we cannot win.”

But it is even more insidious that governments use the infirm protection of best practices as an excuse for overreaching surveillance, capturing and disseminating identifiable information without a willing and knowing grant of access. They falsely imply that only increased surveillance is effective. In fact, dealing with software subversion by a determined, competent adversary is a far more intractable problem than scanning large volumes of Internet traffic, as Flame and Stuxnet amply demonstrate.

Proven verifiable protection languishes
In contrast, the security kernel is a proven, mature technology developed in the 1970s and 1980s. Rather than being reactive, its security is designed in to be “effective against most internal attacks – including some that many designers never considered”. The technology was successfully applied to a number of military and commercial trusted computer platforms, primarily in North America and Europe. It was my privilege to lead some of the best minds in the world in systematically codifying this experience as the “Class A1” verifiable protection of the Trusted Computer System Evaluation Criteria (TCSEC). An equivalent technical standard promulgated in Europe was known as ITSEC F-B3, E6.

Although no security is perfect, this criterion was distinguished by “substantially dealing with the problems of subversion of security mechanism”. In other words, it provided a powerful system-level solution aligned with the threat in just the way that is glaringly missing from current cybersecurity best practices. Unfortunately, at that time addressing this threat was not a market priority.

Although still commercially available, the technology has fallen into disuse in the face of the expedience of the reactive paradigm. It is particularly disappointing that now, at the very time ubiquitous Internet connectivity puts privacy seriously at risk, educators and industry leaders have mostly stopped even teaching that verifiable protection is possible. But today’s researchers have one of those rare tipping-point opportunities to lead the recovery from this clear and present danger to privacy by aggressively leveraging the proven “Class A1” security kernel technology.

Roger R. Schell is President of ÆSec and founding Deputy Director of the (now) US National Computer Security Center.
He is considered the “father” of the Trusted Computer System Evaluation Criteria (the famous “Orange Book”).
