Blog 05: Part 03 – The Psychology of Security

IT security has always interested me, and while I’m not directly responsible for any IT security decisions, it’s a bit of a hobby for me.  One thing I’ve observed over the years, from working in multiple industries and environments (IT and non-IT), is that employees tend to prefer the course of action that requires the least amount of work.  It sounds silly when said like that, because of COURSE people in the aggregate are going to prefer working “smarter,” not harder, but this has some pretty profound implications for IT security architecture.  A specific graphic in the Fujitsu security architecture document in this week’s readings reminded me of this concept.  The graphic (displayed here) shows three desirable traits of ESA: cost, security, and convenience (usability), where maximizing one of those traits comes at the expense of one or more of the others.  You may have a very secure system, but if it’s not easy to use, you will have a user population actively subverting security (intentionally or unintentionally), netting lower security overall.  Now for a couple of stories:

Figure 6-1 from the Fujitsu Enterprise Security Architecture document

Story 01: At my organization, the vast majority of internal web applications and services are authenticated via SAML and single sign on accounts in a browser.  You even need to authenticate to the proxy to visit external websites.  One of the biggest user complaints was the “chore” of having to type in their password each time they opened a new browser session.  How long could this take?  We’re talking the time it takes to tap out an extremely familiar set of characters; this SSO account is, after all, used for EVERYTHING.  But the complaints about having to type in passwords every few hours, or on every new browser session, grew so strong that IT introduced a concept known as “reduced sign on,” where the password is now required only once every 12 hours and the session persists even if the browser is closed.  End user response to such a minor change was incredibly positive.
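The core idea behind reduced sign on can be sketched in a few lines.  This is only an illustration of the concept, not the vendor’s actual implementation: the service records the time of the last successful authentication and re-prompts only after a fixed window elapses, so closing the browser no longer forces a fresh login.

```python
from datetime import datetime, timedelta

REAUTH_WINDOW = timedelta(hours=12)  # the "reduced sign on" policy window

def needs_password(last_auth: datetime, now: datetime) -> bool:
    """True if the user must type their SSO password again.

    Under the old scheme, every new browser session prompted.  Here the
    decision depends only on elapsed time since the last authentication.
    """
    return now - last_auth >= REAUTH_WINDOW

# A user who authenticated at 8 AM sails through at 4 PM the same day,
# but is prompted again at 8:30 PM, once the 12-hour window has passed.
morning = datetime(2017, 10, 2, 8, 0)
print(needs_password(morning, datetime(2017, 10, 2, 16, 0)))   # False
print(needs_password(morning, datetime(2017, 10, 2, 20, 30)))  # True
```

The security trade is explicit: a longer window means a stolen session stays valid longer, but it eliminated the behavior that was driving users to subvert the system.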

Story 02:  Passwords again, sort of!  Remote employees must first log in to a VPN service in order to access network resources while off network.  Authentication happens with a physical RSA hardware token and a user-known PIN.  Again, there was a huge uproar over the imposition of having to enter the randomly generated digits from the token, to the point that users were even sharing tokens and PINs with each other, which is obviously against policy, but they claimed they NEEDED to in order to work.  The solution was a new connectivity service which, rather than prompting the user for a token+PIN combo, creates an application-specific encrypted SSL tunnel and authenticates via a certificate installed on the PC, all in the background.  It’s still VPN….just….hidden from the user.  Again, this new service was received very, very positively, and everyone now talks about the hellish days back when six digits had to be entered from a token.
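The “certificate instead of a prompt” pattern can be sketched with Python’s standard `ssl` module.  The file paths are hypothetical stand-ins for the machine certificate IT pre-installs, and this is of course not the actual product’s internals, just the general shape of client-certificate TLS authentication: the credential is presented automatically during the handshake, so the user types nothing.

```python
import ssl

def make_tunnel_context(cert_file: str, key_file: str) -> ssl.SSLContext:
    """Build a TLS context that presents a client certificate automatically.

    cert_file/key_file are hypothetical paths to a pre-installed machine
    certificate; authentication happens in the handshake, with no prompt.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx

# Note the defaults still verify the gateway's certificate and hostname,
# so authentication runs silently in BOTH directions.
base = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
print(base.check_hostname, base.verify_mode == ssl.CERT_REQUIRED)
```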

The takeaway from these two stories is to not discount the user experience when architecting systems.  I also think security can be improved at organizations that find ways to shepherd users toward good security practices by making those practices desirable or easier.

I’m reminded of the concept of a desire path, which I’m sure everyone has seen: a worn spot in the grass from foot traffic on campuses, in parks, etc.  It’s where enough people take the shortcut and cut the corner by walking where they shouldn’t, rather than staying on the pavement.  Administrators there have a decision to make too:  They could put up ropes and “Keep off the Grass” signs, probably to little avail…or they could simply pave the desired path!


Desire Paths – a path created as a consequence of erosion caused by human or animal foot-fall or traffic.

Blog 05: Part 02 – Software Defined Hardware

Ahh software.  Soon, everything will be software.  It’s great.  Last August I got myself a ham radio license and have been wading into the hobby over the last year, trying out different operating modes, playing with different antennas, buying super expensive radios.  I joined the local club and I am about 30 years under the average age.  And like a lot of old folks, they LOVE to complain about the state of things now and how great everything was in the past.  In this case, they do not like the fact that modern radio transceivers are essentially software running on a PC, rather than crystals, tubes, and oscillators.  It’s the same mentality I find a lot of the older solutions architects I work with have:  New tech is to be feared.

But you cannot deny the flexibility virtualization gives you.  If I pulled up any physical server dedicated to a specific application, I would find 90% of its resources are never used.  You could run five virtual servers on that same hardware and the applications wouldn’t even notice.  The next big virtualization push, IMO, is going to be in the network hardware space.  Software defined networking is going to bring a huge amount of flexibility and THEORETICALLY an increased level of security, as various hosts will be logically segmented and firewalled where today they are not, due to the expense of purchasing that extra hardware or running those physical cables.

I say theoretically, because it will be critical that architects design secure networks and that network administrators implement everything correctly.  Too many times I have seen lazy network admins do some highly questionable things simply to get the heat off themselves when an application was down.  (The problem was a firewall rule blocking a specific application.  Strike 01.  Of course, the application team in charge of it couldn’t tell the network team which ports/protocols needed to pass through the firewall.  Strike 02.  So because this was a production outage and everyone was yelling and pointing fingers, the network admin set the firewall to allow all traffic on any port to get it working.  And since it was working, everyone stopped caring, and the “work around” became permanent.  Strike 03, you’re out!)
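Part of what makes that “work around” so tempting is how little it takes to express.  A minimal sketch of first-match rule evaluation, with ports and rules invented purely for illustration: the proper fix is one narrow rule for the application, while the Strike 03 fix is a single wildcard that swallows everything.

```python
def evaluate(rules: list, protocol: str, port: int) -> str:
    """Return the action of the first matching rule; default deny."""
    for proto, dport, action in rules:
        if proto in (protocol, "*") and dport in (port, "*"):
            return action
    return "deny"

# Least-privilege fix: open only what the application actually needs.
narrow = [("tcp", 8443, "allow")]
# The production-outage "work around": allow everything, on every port.
allow_all = [("*", "*", "allow")]

print(evaluate(narrow, "tcp", 8443))      # allow (the application works)
print(evaluate(narrow, "tcp", 3389))      # deny  (everything else stays shut)
print(evaluate(allow_all, "tcp", 3389))   # allow (so does everything else)
```

Both rule sets “fix the outage,” which is exactly why only an architecture review, not the yelling in the war room, will ever catch the difference.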

Networks that are defined by rules are only going to be as good as those rules.

At least on the hardware side of things, it’s a bit more difficult to screw up.  In GE’s Aviation business, because they have DoD contracts, they actually run two physically separate networks in their facilities, one for employees and one for contractors.  Contractors are completely forbidden from connecting to the primary network; they’re effectively airgapped.  In my previous example, it is literally impossible for an unwary network admin to change a firewall rule to allow access across networks, because they are physically different.  Not so with SDNs.

Software defined networks are going to proliferate rapidly due to their low cost, and it will be critical from a security architecture standpoint to ensure 1) robust architecture and best practices are created as standards and 2) those standards are enforced and periodically reevaluated.

Blog 05: Part 01 – Venting

I’m going to depart from my usual format for this entry to vent a little about IT security and how it is often perceived in organizations.  While this is mostly going to be me whining, I think some of what I’m going to say is germane to this week’s topic of security architecture:  Today’s corporate culture enables poor IT security practices. 

If you ask any executive, IT or otherwise, of course they will say that IT security is critically important.  After all, data today is an enterprise resource, and it makes good business sense to safeguard it as such.  And then there are the more nebulous moral or ethical considerations about the obligation to safeguard other individuals’ PII.  (I’m looking at you, Equifax!)  Nobody sets out to get hacked; security breaches are simply the culmination of years’ worth of poor security practices in the aggregate.  And bad luck.


Being 100% secure is an impossibility.  There is a point of diminishing returns beyond which further investment of time or resources provides a poor return.  In practice, though, the line is usually drawn well before that point because of risk tolerances and financial pressure to reduce IT spend.  In my relatively short IT career, I have seen countless times IT executives making decisions that minimize short-term costs at the expense of the long term, knowing full well they won’t be around to be held responsible once the entire setup becomes unsustainable.  It is very difficult to justify spending money on intangible things like IT risk.

Oh, we need to pay $100K to fix Critical System X because there is a defect an attacker might leverage?  Hmm, I sometimes forget to lock my front door and I’ve never been robbed because of it.  I think instead we’ll not pay the money so I get a nice bonus.

Hopefully these recent hacks will make it easier for IT leaders to do the right thing.  My fear, though, is that with each new hack, people will become desensitized.  And it won’t be long until someone starts calculating that it might actually be cheaper to deal with the fallout of a breach than to spend money preventing it in the first place.
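That back-of-the-envelope calculation already has a name in risk management: annualized loss expectancy, ALE = single loss expectancy × annualized rate of occurrence.  The numbers below are invented purely to show how easily the math can be framed to argue against a fix like the $100K one above.

```python
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy: expected breach cost per year."""
    return single_loss_expectancy * annual_rate

# Invented figures: a breach costs $2M and is estimated at once per 25 years.
expected_yearly_loss = ale(2_000_000, 1 / 25)   # $80,000 per year
fix_cost = 100_000                               # the hypothetical patch above

print(expected_yearly_loss)             # 80000.0
print(fix_cost > expected_yearly_loss)  # True: "skipping the fix is rational"
```

Of course, the comparison is only as honest as its inputs; a one-time fix against a recurring annual loss, a guessed breach probability, and reputational damage that never fits in a single number.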