Featured post

Cyberspace and Security 3rd Edition

We developed our cyber defenses largely based on variations of a physical firewall. This does not work, and never did; it has been mathematically proven that any firewall can be penetrated.
Cyberspace is fundamentally different from physical space.

Cyberspace is an Information and Communications space.

In cyberspace information cannot be destroyed; only a copy of information can be destroyed.

Physical space is limited; cyberspace is unlimited.

Physical space has three dimensions; cyberspace has unlimited dimensions.

In physical space visibility is unavoidable; in cyberspace visibility is optional.

In physical space only one identity is permitted; cyberspace permits multiple identities.

Spycraft101 Apple podcast episode #97

https://podcasts.apple.com/us/podcast/97-defecting-from-the-ussr-with-olga-sheymov/id1567302778?i=1000614816892

This week, Justin chats with Olga Sheymov. Olga has worked in high technology, on arts projects, and as a television producer. Her TV credits include the long-running series Russia Today, produced from 1997 until 2015, and Your Source TV, among other projects. But long before she began her media career, Olga and her husband Victor Sheymov defected to the US and were smuggled out of the Soviet Union and into the Carpathian Mountains by a team from the Central Intelligence Agency in 1980. Victor was a high-ranking member of the KGB and proved to be an incredibly valuable source of information for the US government for years to come, although their relationship with the CIA and FBI encountered many problems, to say the least.

Power grid: when cyber lines cross

We have very little time to cure our stone age cyber defensive technology. But that requires changing the current equation and making cyber defense inherently more powerful than the offense.

The CNN story citing testimony by Admiral Michael Rogers, head of U.S. Cyber Command, to the House Select Intelligence Committee on November 20 sounded like shocking news: he stated that China can take down our power grid. http://www.cnn.com/2014/11/20/politics/nsa-china-power-grid/index.html

Shocking as it may be, if this is still “news,” surprise, surprise — it’s been known to everyone who was anyone in cyber security for over 25 years. First it was just the Russians, then the Chinese, then some vague criminals acting on behalf of “nation-states” were gradually added to the list.
Never mind the Russians and the Chinese – they also both have enough nuclear weapons to kill every squirrel in America. What is really troubling is the cyber security trend. Our cyber defensive capabilities have hardly improved for over a quarter-century. However, hackers’ attacking capabilities are improving constantly and dramatically. This is not a good equation — sooner or later these lines will cross. This means that a large number of unknown hackers will be able to take down our power grid and also decimate our power-intensive facilities, such as oil refineries, gas distribution stations, and chemical factories.
Now, think terrorists. They would be delighted to do exactly that, whether you kill them afterwards or not. This isn’t news, but it’s an increasingly troubling reality. We have very little time to cure our stone age cyber defensive technology. But that requires changing the current equation and making cyber defense inherently more powerful than the offense. That won’t happen until the doomed legacy password and firewall paradigms are abandoned and replaced by fundamentally different technologies.

First Recorded Breach of Security

A standard first dictionary definition of security is freedom from danger. Danger, or threat, as it is often labeled, has to be present or assumed to be present; otherwise there is no need for security. In recent years, threat has conventionally been defined by security professionals as the sum of the opposition’s capability, intent (will), and opportunity, and can be expressed thus:

Threat = Capability + Intent (will) + Opportunity

Indeed, without a capability, an attack cannot take place. An attacker must possess a specific capability for a specific attack. For instance, the Afghan Taliban cannot carry out a nuclear missile attack on the United States even if they have full intent and an opportunity. Intent or will is also a necessary ingredient. North Korea has the capability for a nuclear strike on South Korea, but many factors keep their will in check. Similarly, Iran may have a capability to attack a defenseless American recreational sailboat in its territorial waters, and be perfectly willing to do so, but American recreational sailors just do not go there, providing no opportunity.

Furthermore, applying this formula usually does not produce precise results, since ingredients such as capability and opportunity are usually not known exactly and often are just assumed. A classic example of this is the infamous case of Iraq’s weapons of mass destruction as justification for the last Iraq war.

The first recorded breach of security occurred in the Garden of Eden. Apparently, there was a sense of threat, and Cherubim guarding it with flaming swords were the security measures taken. However, the security measures were insufficient, and that allowed the serpent to infiltrate the Garden of Eden and do his ungodly deed.
In fact, there is no perfect security. We can only provide degrees of protection; i.e., if there is a threat, risk is always present, though its level may vary. Often this is reflected in the statement that risk is a combination of threat and vulnerability:

Risk = Threat + Vulnerability

This looks logical, since vulnerability means exposure to a certain threat. This also leads to the assertion that:

Vulnerability is a deficiency of protection against a specific attack.

A reasonably comprehensive definition of security would probably be something like:

A set of measures that eliminates, or at least reduces, the probability of destruction, theft, or damage to a being, an object, a process, or data, including the revelation of a process, or of the content of information.
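The two formulas above are written as sums, but the examples make clear that the “plus” behaves like a logical AND: remove any one ingredient and the threat, and hence the risk, disappears. A minimal sketch of that reading (illustrative only; the class and function names are my own, and booleans stand in for quantities that are in reality graded and uncertain):

```python
from dataclasses import dataclass

@dataclass
class Adversary:
    capability: bool   # can they mount this specific attack?
    intent: bool       # do they want to?
    opportunity: bool  # is a target exposed to them?

def threat(a: Adversary) -> bool:
    # Threat = Capability + Intent (will) + Opportunity
    # The "+" acts as a conjunction: all three must be present.
    return a.capability and a.intent and a.opportunity

def risk(a: Adversary, vulnerable: bool) -> bool:
    # Risk = Threat + Vulnerability
    return threat(a) and vulnerable

# The Taliban example from the text: full intent and opportunity,
# but no nuclear capability, so no threat and therefore no risk.
taliban = Adversary(capability=False, intent=True, opportunity=True)
print(threat(taliban))       # False
print(risk(taliban, True))   # False
```

In practice, of course, each ingredient would be an estimate rather than a boolean, which is exactly why the formula rarely produces precise results.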

A hack is forever, but so are its fingerprints

A while ago I published a post, “A hack is forever,” explaining that a competent hack is extremely difficult to eradicate from a cyber system – there is no certainty that the system is really clean. However, there is a flip side to this in cyberspace that is not commonly understood: by way of dubious consolation, the hacker cannot be certain that he really got away with the crime.
Cyberspace is an information and communications space. In essence, we don’t really care what media our information is stored on, we care about the utility aspects, such as the efficiency of the storage, how quickly and conveniently information can be retrieved, and so on. Similarly, we don’t really care what communications channels we use, we care about our communications’ speed, reliability, security, etc.
Cyberspace has one very significant property: information cannot be destroyed. Just think about it. We can destroy only a copy of the information, i.e. we can destroy a physical carrier of the information, such as a floppy, a thumb drive, a letter, or a hard drive (often that’s not an easy task either). However, we cannot destroy the information itself. It can exist in an unknown number of copies, and we never really know how many copies of a particular piece of information exist. This is particularly true given the increasing complexity of our cyber systems — we never really know how many copies of that information were made during its processing, storage, and communication, not to mention a very possible intercept of the information by anyone, given our insecure Internet. Such an intercept can open an entirely separate and potentially huge area in cyberspace for numerous further copies of the information.
Back to the consolation point: cyber criminals of all sorts can never be sure that there is not a copy somewhere of whatever they have done. That copy can surface, someday, usually at the most inopportune time for the perpetrator.
One practical aspect of this is that Congress perhaps should consider increasing the statutes of limitations for cyber crimes, or crimes committed via cyberspace.

Don’t blame the victim; fix the cyber technology

The perennial excuse for our dismal performance in cyber security keeps showing up again and again. Some “experts” state that 95% of cyber security breaches occur due to human error, i.e. not following the recommended procedures. There’s a sleight of hand in these statements in that many breaches include human error, but do not occur due to that error. While the 95% number might be suspect, the real point is different: even following all the “recommended security procedures” will not protect our systems from cyber attacks.
It’s true that attackers often exploit users’ mistakes. But the reason is simple and obvious – human errors do make it easier to penetrate a system. In effect they represent a shortcut for an attack, but by no means do they eliminate the many other ways to do it. Why would an attacker take a more complicated route if he can use a shortcut?
Of course, users’ awareness of security is not common or comprehensive. This was vividly demonstrated by one very important Government agency not that long ago. Its board, after a thorough (and expensive) “expert” study mandated that employees use a six-letter password instead of the old and “insecure” four-letter one.
This is a pretty pathetic solution, but the much bigger question is: do the users really need to follow or even know complicated procedures? The answer is: no, not at all.
Indeed, cyberspace presents us with a wonderful opportunity to build very user-friendly effective security systems. It’s quite possible to build cyber security systems that would be extremely strong, even mathematically unhackable, that would require the user only to select the party he is going to communicate with, and then to indicate “secure.” No other security-related actions would be needed. This is very different from our current security technology based on concepts of physical space, where the weakest link in the security chain is the human factor. But up until now we have failed to take advantage of this great property of cyberspace.
If, as is claimed, our cyber security misery is a “people” problem, this is true only in a very narrow sense. It’s not the users who are the problem; the problem belongs to the people who design and build our worthless cyber security systems.

The Attack on Private Encryption

The current anti-encryption political push by a choir of government bureaucrats is picking up steam and has lately been joined by the head of the British MI5. The usual scarecrow of terrorism is invoked and used bluntly in public statements that border on unabashed propaganda. I did not want to write about it, but what is going on is just too much to take. The real goal of the whole campaign is suspect, so it’s worth taking a closer look at the issues involved.
Point one – ideological: We view ourselves as a democracy. With that in mind we need to understand that encryption has existed for at least four thousand years. During that time most of the rulers were ruthless tyrants, and for all of them their #1 priority was to protect their rule. But even they did not crack down on private encryption – because it’s not practical (see Point three below), and they could not enforce it anyway. We, on the other hand, are facing a bunch of bureaucrats demanding the practical end of meaningful private encryption. How can a democracy impose more restrictions on its citizens than a country suffering under the rule of a tyrant?
Point two – technical: During all these thousands of years encryption algorithms have been consistently and quickly cracked by experts, usually employed by the government. Only a very few encryption algorithms withstood scrutiny for a few years, and those strong algorithms were developed by government experts and have always been well outside the reach of the general public at the time. Contrary to popular belief, all commercially available algorithms have been cracked very quickly after their introduction. Governments have traditionally been very shy about disclosing this. The situation is no different now. If a target used commercially available encryption algorithms its communications have been quickly cracked. So, what is the technical difference in the current situation? The simple answer is the sheer volume of information passing through the Internet. Individual communications can be cracked, but not the entire Internet traffic. That’s what the government bureaucracy is after: the ability to read ALL the traffic, i.e. all of our communications.
Point three – practical: the purpose of encryption is to assure privacy of communications. There are many ways to do this other than encryption. One vivid example: during the years we were hunting Bin Laden, he did not use the Internet at all; he used messengers. He could just as well have used the regular mail. Furthermore, it’s well known that the 9/11 terrorists were communicating over regular phones, but in Aesopian language. For example, they referred to a terrorist act as a “wedding.” So are our bureaucrats next going to demand the right to read all our mail, or make a terror suspect of anyone who mentions a wedding over the phone?
Conclusion: The simple truth is that the Government can penetrate any commercial encryption available to terrorists. That is, if they actually go after terrorists. However, they are now demanding the right to go after everyone, mostly law-abiding citizens. If that demand is denied, there’s still nothing to prevent them from going specifically after terror suspects.
The moral here is pretty straightforward: if we call ourselves an uncorrupt democracy we should be very careful about giving our bureaucrats too much power, inasmuch as they want more power than the tyrants of history could get. Furthermore, the bigger danger here is that losing civil rights is a very slippery slope.

The secret reason behind the Chinese hacking

For quite some time I’ve been puzzled by the alleged Chinese hacking of our databases. I could understand if they hacked our advanced research and development – that would save them time, effort and money. But why the databases? Then it dawned on me: it’s a savvy business strategy.
We routinely encounter problems with our databases. One organization can’t find our file, another somehow has the wrong information about us, and all too often they certainly can’t get their act together, and we see classic cases of the left hand not knowing what the right one is doing. The pre-9/11 non-sharing of intelligence is a good illustration. In other words, we have a somewhat messy general situation with our databases; we’re used to taking this in stride; and we just sigh when we have to deal with some organization that accuses us of something we aren’t guilty of.
The Chinese understood the problem, but they just never got used to it. For many centuries they had a much bigger population than other countries, but somehow they always managed to know exactly who is who, who is related to whom, and what he/she is doing.
So naturally they wanted to have the same level of knowledge about the rest of the world. To their dismay, in the US they found disorganized databases and mismatching records. So they had to process all that information to make sense of it for themselves. And suddenly they saw a perfect business opportunity: they would develop a gigantic and very efficient database of the US, and then sell this data back to us piecemeal, retail. This would give them full and exact knowledge of the US, and the US would pay for the project, with a significant profit for the Chinese. For us this would be a very valuable service, a kind of involuntary outsourcing where we (both the Government and the private sector) can get relevant and reliable data at a modest price. Makes perfect business sense.
This approach has a special bonus for the US Government: when buying data abroad they won’t have to deal with privacy restrictions imposed by the US Constitution and constantly debated by Congress. The logic is impeccable: we bought it abroad, and if the Chinese know it, we are entitled to know what they know about us.

The Android phone vulnerability has been “fixed” – really? How about Android Pay and Google Wallet?

The recently discovered vulnerability in the Android operating system that affected 1 billion smartphone users (corrected to a mere 950 million, according to the phone manufacturers) followed a typical path:
a) The next gaping security hole is discovered by researchers, who alert the manufacturers;
b) The manufacturers make a patch for future buyers;
c) The manufacturers and service providers do nothing to help or even alert the affected users;
d) The researchers lose patience and publicly disclose their discovery of the flaw;
e) The manufacturers report that they “fixed the glitch within 48 hours,” and keep quiet about the customers affected.
The frustrating part of this all too familiar pattern is that it ignores the victims – the customers who already bought their phones. These customers were assured by the manufacturers’ marketing and sales people at the time of purchase that the product (a phone in this case) is very secure and is equipped with a top-notch security system – so their privacy is assured. Software patches like the one in question are very easy to incorporate into new phones. However, it would cost money to fix the defective products already out there, and this seems to deter the companies from making the fix.
But the most interesting aspect of the situation is not what the manufacturers say, but rather what they don’t say. It should be understood that the vulnerability discovered presents not one problem but two. One is that the phones without the patch can be hacked at some point in the future. The other is that the phones already hacked are under the hackers’ control. So the most important question is: can that control be reliably taken away from the hackers and returned to the customer? The manufacturers readily acknowledge the simpler first problem, but quietly ignore the existence of the second, much bigger one.
In practical terms, even if the fix is installed on an affected phone, the real question is: does it neutralize the effect of the hack? In other words, if my phone was hacked, the perpetrators have established control over it. Does the fix eliminate that control? A pretty safe bet here is that it does not. The fix just prevents another hack using the same method. But in that case, what’s my phone worth now that I can no longer assume my privacy, or the security of my financial transactions? It looks like the manufacturers may not be complying with the implied warranty laws. At the very least this is a priority research problem for our increasingly numerous legal experts.
Every aspect of these issues is fast approaching a real-world test, especially urgently given the proliferation of smartphone-based payment systems like Apple Pay and Google Wallet.