Securing personal digital identity

Much has already been said about last weekend’s dramatic and extensive expropriation and destruction of Wired writer Mat Honan’s digital life. Of course, Honan’s was just one of the hundreds of millions of digital identities that are compromised every year; the attention is a consequence of Honan’s popularity and the perpetrator’s level of effort. This could befall any of us, or any institution to which we entrust our personal information.

Some good has already come out of this incident: Apple and Amazon have modified their over-the-phone customer service policies, which ought to make it a bit harder to exploit the social-engineering aspect of this attack. I hope that you will take the cue and do a little “health check” of your own practices.

Am I safe enough?

Nothing can guarantee safety, but some common-sense steps can make you safer. I found Sean Gallagher’s “Secure your digital self: auditing your cloud identity” a reasonably sound set of guidelines and advice. Adopting its recommendations takes a little work, perhaps half a day this weekend, but you will get that half day back in peace of mind even if you are never hacked! The only burdensome part is making sure that you have a distinct, strong password for every on-line account. (Of course, you will need a good, secure password manager / keychain to make this work.)
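If you are curious what “distinct and strong” means in practice, here is a minimal sketch in Python (purely illustrative; a real password manager generates, stores, and fills these for you, and the account names below are hypothetical):

```python
import secrets
import string

# Draw from a cryptographically secure source (secrets, not random).
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def strong_password(length: int = 20) -> str:
    """Return one random, high-entropy password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per account -- never reused anywhere.
for account in ("email", "bank", "cloud-backup"):   # hypothetical accounts
    print(f"{account}: {strong_password()}")
```

A 20-character password drawn from roughly 94 symbols carries on the order of 130 bits of entropy, far beyond any plausible guessing attack; the keychain’s job is to remember it so that you don’t have to.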

Am I a responsible IT provider?

If you write, design, buy, or test on-line information systems, I believe that you have an ethical obligation to understand sound security practice. Here are three sites that I have found to be good starting points.

If you haven’t already done so, I hope that you will explore these sites and incorporate the relevant technologies and methods into your work. Also, do your best to make sure that those with whom you work do the same!

Is this good enough?

All the guidelines, standards, and controls in the world won’t help if we don’t actively employ critical thinking. How can the city of my birth or my mother’s maiden name (both matters of public record) be acceptable answers to “secret questions”?

Perhaps the increased vulnerability that arises from authenticating to multiple accounts from a common source (whether Facebook, Google, or OpenID) is an unexpected consequence. But it’s hard to see that it ought to be called an unforeseeable consequence.

Sure, two-factor authentication makes technical sense for digital natives. But what about the elderly or the infirm? In a world where one must log in to communicate with family, manage a bank account, and RSVP to the club meeting, we disenfranchise too many of our fellow citizens by rushing ahead with businesses and services that are ready for prime time in every respect except their security posture.

British Airways “Know Me” Customer Service Program to Use Google Images

The shocking thing about British Airways’ use of Google Images to identify first-class passengers is not that some have raised privacy concerns, or even that BA seemed so tone-deaf to the objections. The shocking thing is that so few seem to be alarmed by the program.

For example, when Gizmodo asked their readers if they were comfortable with the idea, the very first reply read “Any of my information that is freely available to [G]oogle is my own fault.” And the very next comment started “Yeah I don’t get these ‘privacy’ concerns. There used to be a time when everyone knew everyone else in town by name.”

The writer is mistaken in his belief that it is your own fault if such personal information becomes available to Google (or to other big-data harvesters like Facebook and LinkedIn). Even at its most benign, the information in question might have been posted by a third party, perhaps by a stranger, or disclosed by a firm (possibly now defunct) in violation of its own privacy policy. The information might be inaccurate or misleading. It might be completely false.

Or the information might not be yours at all. It might just have “probably” been associated with you. Remember Portland, Oregon attorney Brandon Mayfield? He was jailed for two weeks as a suspect in the horrific 2004 Madrid train bombings. Mayfield had the misfortune of having fingerprints similar to those of a terrorist wanted in Spain, a match discovered by an FBI computer. He might be in prison today had the Spanish authorities not vigorously objected. The process of computing probabilistic “many-to-many” connections across colossal quantities of information invariably leads to many such mistaken inferences.
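The arithmetic behind that claim is straightforward. Here is a back-of-the-envelope sketch in Python; the database size and false-match rate are assumptions chosen for illustration, not the FBI’s actual figures:

```python
# Hypothetical figures, assumed for illustration only.
database_size = 50_000_000    # fingerprint records on file
false_match_rate = 1e-6       # chance an innocent record "matches"

# Expected number of innocent people flagged by a single search:
expected_false_hits = database_size * false_match_rate
print(f"Expected innocent matches per search: {expected_false_hits:.0f}")  # 50
```

Even a matcher that errs only once in a million comparisons, run against tens of millions of records, will routinely produce confident-looking hits on entirely innocent people.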

And if you believe that disclosing personal information is “your own fault,” I recommend that you invest 7 minutes to watch Gary Kovacs’ short TED talk “Tracking the Trackers.”

Even so, why isn’t this, as the second writer argued, simply a return to times of yore, when we all knew each other by name? Because even a couple of hundred years ago, compiling a dossier on one of your fellow villagers could get you sued for violating the right to privacy.

In nineteenth-century Boston, one might sit at home and read whatever part of the newspaper or book one pleased, in utter privacy. But today the newspaper and the book are on-line, so every turn of the page is available in your data stream, ready for near-real-time correlation with your location, your last restaurant visit, what you ate there, who you were with, and the last thing that you bought at a store.

In the hands of big-data scientists, this data stream yields ever more powerful inferences. And it is for sale to whomever Google (or Facebook or LinkedIn) chooses to offer it. These are not your fellow villagers. They are huge, powerful organizations that can hold great sway over your life.

BA may be using the technology today to offer an innocuous, customized “Good morning, Mr. Shear,” should I have the capacity and desire to buy a first-class ticket on that very good airline.

But consider, for example, targeted offers of credit, of houses for sale, of jobs, of scholarships, each optimized against some profit or performance objective serving the sponsor’s own specific interests. That objective function can easily result in de facto discrimination, and if you are the victim, you will never know it: it is the targeted ad or personalized email offer that you never received.
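To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch (the names, zip codes, profit estimates, and “risk” factors are all invented). Nothing in it mentions a protected class, yet optimizing the sponsor’s objective on a correlated proxy quietly excludes someone who never learns that an offer existed:

```python
# Hypothetical applicants and figures, invented for illustration.
applicants = [
    {"name": "A", "zip": "02101", "est_profit": 120.0},
    {"name": "B", "zip": "02121", "est_profit": 115.0},
]

# The sponsor's objective: expected profit, discounted by a "risk"
# factor learned from historical data keyed to neighborhood (a proxy).
risk_by_zip = {"02101": 0.95, "02121": 0.60}   # assumed values

def objective(a):
    return a["est_profit"] * risk_by_zip[a["zip"]]

targeted = [a["name"] for a in applicants if objective(a) > 100.0]
print(targeted)   # ['A'] -- B never sees the offer, and never knows why
```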

The list of really harmful yet feasible applications is limited only by the imagination but, alas, not limited by law or regulation.

Why the Humble Programmer?

“The Humble Programmer” is the title of Edsger Dijkstra’s 1972 ACM Turing Award lecture. The talk is a mélange of personal recollections and strong opinions on the relative merits of various mid-century computing issues. He discussed the emerging “software crisis”: the challenge of constructing good software in the face of ever more powerful machines and ever more demanding user expectations. Dijkstra believed that the creation of good software requires an appropriate state of mind. He wrote,

We shall do a much better programming job, provided that we … respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.

Dijkstra was not the only one pointing to the human dimension of software creation. Jerry Weinberg’s 1971 book The Psychology of Computer Programming draws a vivid connection between humility and the creation of quality software. If you are a practicing software engineer and haven’t already read Steve McConnell’s Code Complete, I (humbly) suggest that you do so immediately, especially the brief but profound chapter 31, “Personal Character.”

So with a tip of the hat to Dijkstra, Weinberg and McConnell, I humbly thought I’d entitle this blog “The New Humble Programmer.”