I have been watching the various analyst and corporate dissections of the OPM breach. They range from the sublime – "No Comment" (the White House) – to the ridiculous – "Why were the social security numbers not encrypted?" The latter was a statement from House oversight committee chairman Jason Chaffetz after being told that up to 14 million people with government security clearances – whose data included criminal records, drug use history, detailed financial records, marital infidelities and 126 additional pages of deeply personal and compromising information on each person – had been breached.
ThreatConnect posted a four-page analysis of the OPM hack. It included discussions of the malware packages that were possibly used and of the means of connecting the hack to the Chinese. It was highly technical, well thought out and cogently presented.
But the phrase "social engineering" was used only once, in the last paragraph, as a near aside to the main threat – a suggestion that the hacked data could help socially engineer someone.
This shows the typical lack of comprehension, among the technical crowd, of the craft of social engineering. Social engineering has become about 75% of the average hacker's toolkit, and for the most successful hackers, 90% or more.
With little research, I can easily find an organisation chart within OPM giving titles and names. Once I have a target, the target can be "humanly" engaged. To take one example: I find the target's "dream" love partner, or the ideal friend, not by hacking into a database, but by observing eye movements and other body language over a short period of time, and then I insert that ideal person into the target's path. From that engagement and its end products comes whatever specific technical material I need to gain what I want. The more sophisticated the social engineering, the less the need for high technology.
A simple social engineering hack might involve leaving a thumb drive on the pavement close to the driver's door of a car. The thumb drive might be labelled "naked photos" or "first-quarter profits". The idea is to entice the driver into inserting the thumb drive into his computer. From that point, technology takes over, and the majority of the remaining hack will be purely technical. On the other hand, the "dream love partner" hack mentioned above would most likely require very few technical resources once the target's password or other information has been obtained.
Frequently, no technical resources are needed.
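As an aside, security teams sometimes run exactly this kind of thumb-drive exercise against their own staff, using a benign "canary" file in place of real malware. The sketch below is my own illustration, not anything from the OPM material: opening the bait simply records who took it to a local log, where a real exercise would typically report back to the security team's server.

```python
# Benign "canary" payload for a USB-drop awareness exercise.
# Opening the file records the event locally; a real exercise
# would report to the security team's server instead.
import datetime
import getpass
import json
import socket


def record_bait_taken(logfile="usb_drop_log.jsonl"):
    """Append a record of who opened the bait file, and when."""
    event = {
        "time": datetime.datetime.now().isoformat(),
        "host": socket.gethostname(),
        "user": getpass.getuser(),
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event


if __name__ == "__main__":
    record_bait_taken()
```

The point of such an exercise is not the code, which is trivial, but the human result: how many employees plugged in an unknown drive and opened an unknown file.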
The ThreatConnect dissection has it backwards. How did this tunnel vision creep into our once broadly focused security paradigm?
Step by step, is the answer. Technology is king. It is that ephemeral shape that shifts and transmutes before our eyes. It is the illusion that we chase with wonder and awe.
But the human condition, which has changed little for more than 50,000 years, is a known constant that we can rely on. We all love and hate. We dream and hope. We have fears and longings. We are ambitious, or not.
One who understands the relationships between the human heart and the human mind will always out-hack those who chase after an ever-changing technology. We have somehow forgotten this truth.
I am not dissing technology. I was once a giant in the field, or so they say. I was at least competent. But technology alone will not solve our security problems. Security is a human problem.
What do we secure ourselves against? Polar bears or great white sharks?
No. We secure ourselves against the deceptions and ambitions of our fellow humans. It is a human issue and if we forget the human element then we have blinded ourselves.
And are we not flying blind, right now, in this moment? Haven't the accelerating government and corporate breaches shown us something? Our technology has failed us and we must accept that fact.
Our technology has failed us because it evolved so rapidly that the human element was, of near necessity, ignored – dismissed as irrelevant, as incapable of keeping up, and therefore as something to be worked around.
But it is relevant, and it cannot be worked around. We are, if we are honest with ourselves, the centre of our own universes. What workaround can there be for that? We have to factor ourselves into our technologies, not as an afterthought, but as the central issue.
It is a terrible thing for a father to have to bury his own child, and something difficult for a man to do alone. But it must be done. The industry that I helped father in 1987 is dead. It has been a slow and agonizing death, and it is time for a burial. The entire antivirus industry, and all of its offshoots, have reached the end of their useful life.
The relationship between the antivirus developers and their customer base has been the following: the customer designs whatever he wants, and the antivirus world will step in and protect it. That relationship, it must be obvious, no longer functions.
When I designed the first antivirus program, in 1987, new applications for the Windows platform were being developed and released at a rate of about one new application per month. A manageable number.
Today, there are more than 10 million malicious apps, according to Kaspersky Labs.
The smartphone – our own personal spy device
In addition, we all carry a purposely designed spy device: our smartphones.
These, and other mobile devices, are susceptible to every type of malware. They are the weakest link in our chain, and they are helping to bring us down.
Why? Because the fundamental design of a smartphone supports, more than anything else, the acquisition of data about its owner. It is information about us and our doings that is of paramount value. Here is a major human element that must be addressed.
This paradigm cannot be allowed to continue. The smartphone supports hacking as if it were designed by hackers, for hackers – and every human within the workplace carries one.
These devices are used by employees to access, at some level, their corporate or government employers' data, bringing the weakest link into the centre of the corporate world's intranets.
These corporations and government agencies, in turn, are using intranets and security systems that were not designed to handle the massive holes that mobile computing creates.
In addition, existing encryption algorithms and network containment systems are embarrassingly out of date, unwieldy, not user friendly and easy to circumvent using even simple human engineering techniques – a craft that is advancing by leaps and bounds, drawing on disciplines that range from psychology to neurolinguistic programming.
Software users must be trained, both in the operation of their software and in the means of combating social engineering attacks. Coaching government employees on how to increase productivity, as the OPM is suggesting, is a laughable response to a major security breach.
Corporations and government agencies must immediately implement appropriate training that will alert software users to potential human engineering attacks. We don't need productivity coaches. We need security coaches.
More importantly, it is time for application writers, operating system designers, and communications and networking developers to bite the bullet and factor the human element, and the hackers' toolkit of human engineering tricks, in its entirety, into a new approach to development.
I am predicting that the following approach will be nearly universal within the next five years:
Every legitimate software developer, of any kind, will have a programming staff exactly divided between developers and hackers. For every programmer, a hacker will be assigned.
From day one, when the high-level design is done by the programmer, the hacker will begin planning how to break into that design. Every line of code created by the developer will be passed to the hacker, who will search for ways to utilise the code for malicious purposes. If a way is discovered, the hacker informs the developer, who then modifies the structure.
This constant two-way feedback should, in the end, create an ironclad module.
On the higher level, modular interactions created by the high-level designers, will likewise be monitored by higher-level hackers and the same process will play out. Ultimately, given competent designers and hackers, an ironclad system will be developed.
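The feedback cycle described above can be sketched as iterative hardening: the hacker probes, the developer fixes, and the loop repeats until the hacker comes up empty. Everything below is my own illustration – the function names and the toy "flaws" are invented placeholders for real review findings.

```python
# Iterative hardening loop: hacker probes, developer fixes, repeat.
def harden(module, find_flaw, fix_flaw, max_rounds=100):
    """Run the developer/hacker feedback cycle until no flaw is found."""
    for _ in range(max_rounds):
        flaw = find_flaw(module)          # the hacker's pass over the design
        if flaw is None:
            return module, True           # "ironclad", as far as this hacker can tell
        module = fix_flaw(module, flaw)   # the developer modifies the structure
    return module, False                  # review budget exhausted

# Toy stand-ins: a "module" is represented by its set of known weaknesses.
def find_flaw(module):
    return next(iter(module), None)

def fix_flaw(module, flaw):
    return module - {flaw}

weaknesses = {"sql injection", "hardcoded secret", "path traversal"}
hardened, ironclad = harden(weaknesses, find_flaw, fix_flaw)
# hardened is now empty, and ironclad is True
```

The interesting property of the loop is its termination condition: the process ends not when the developer is satisfied, but when the hacker can no longer find anything – which is exactly the reversal of responsibility the article is arguing for.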
Now, are companies going to be willing to more than double their development costs? (The hackers will be more expensive than the developers, I can promise you.) The answer is: they will have to in order to stay in business.
Are customers going to be willing to increase their costs to acquire secure programs? They will have to in order to stay in business.
How long will it take to begin this necessary change in our security development paradigm? That is up to the software manufacturers. We obviously don't have much time.