Layer 8

Security is fundamentally about people, and everything we know about people is relevant to security. -- B. Schneier

Soapbox update.

Just a quick update to say that I’m both excited and honored to be speaking at two new (to me) conferences later this year:  DEF CON 19 and Hackfest.ca (and many thanks to those wild and crazy French Canadians for the invite to the latter).  Hmm, Vegas in August and Québec in November: what could possibly go wrong?

Posted by shrdlu on Monday, June 20, 2011

Why Sony is no surprise.

The only people who appear to be surprised at how many times Sony is being breached are those who have never played defense for a large organization.

It’s very true that defenders have to get everything right 100% of the time, and attackers only have to find one flaw to be successful.  Those odds are pretty daunting, right there.  But even more daunting is the fact that the CISO has to accomplish this amazing feat indirectly, through the efforts of dozens or hundreds or thousands of other people, many of whom are resisting security.  If you could walk around and patch and configure everything yourself, you might have a running chance at doing it.  But take a dynamic environment where systems are aging, new vulnerabilities are being found every day, and people are constantly changing configurations (either for good reason or just to suit themselves), and it becomes a game of Whack-a-Mole on a galactic scale.

The problem is that good security today requires a level of rigor and discipline that is impossible for most organizations to achieve.  Imagine trying to control and standardize the use of every single piece of paper and pen in a company, every stapler, every whiteboard, and every office supply.  Banning scissors and tape just isn’t going to work either.  Imagine trying to find and get rid of every old sheet of paper that doesn’t meet the new standard size and shape.  Try explaining to management why they can’t have purple whiteboard markers, only green ones. 

Even if management agrees that security is a priority, you’re not even halfway there.  Good security requires that policies and procedures be followed—no deviations.  And deviations are the very essence of innovation, aren’t they?  For a business to grow, it needs to break its own rules, and do so on a regular basis.

The right level of security is hard enough to achieve in an organization that expects to be targeted, has always been targeted, and has evolved all its infrastructure and operations based on that knowledge and experience.  Take a company that only understood a lower level of risk in the past, and suddenly ratchet up that risk with targeted attacks, and it has no foundation with which to respond.  We’ve seen this with every level of merchant and even with security companies such as RSA and HBGary.  When you’re suddenly attacked at a new level of intensity or through a new vector that you never managed before, it’s not just a matter of finding the right OS patches to apply or fixing a few SQL injection vulnerabilities.  It’s a matter of getting an enormous group of people, most of whom don’t understand what’s needed, to suddenly all march in the same direction.  It’s a matter of fundamentally changing the day-to-day operations, and the inertia of a large organization takes a really long time to change.  In a certain sense, it would have been easier for Sony to recover from a natural disaster than from this persistent stream of attacks.

As chaotic actors rise to the fore, the rulesets we’ve been working with in security are changing.  It’s not enough to protect against opportunistic attacks.  We can no longer be confident that we won’t face a higher level of risk tomorrow; at any time we can be explicitly targeted just because we happened to piss off a critical mass of capricious people.  Organizations that have never faced this in the past will take a long time to learn and believe that it could ever happen to them.  And they won’t be able to respond quickly enough either, not when they’ve been happily living in a house of wood and suddenly need a house of bricks instead.

So I don’t have any contempt for Sony, just pity, and a large amount of empathy.  There but for the grace of $DEITY go all of us.  ALL of us.

Posted by shrdlu on Saturday, June 04, 2011

The difference between curmudgeon and curmudgeon.

I’ve read with interest both sides of the “curmudgeon” debate, and while I understand the arguments on both sides, I don’t think that it’s about seniority or caring.  It’s about maturity, which is a very different beast.

In my more than 25 years in the industry, I’ve seen the attitude promulgated that if you’re smart and have skillz, it’s okay to be an asshole.  That it’s somehow okay to hurl insults under the guise of “educating” someone and that they should be grateful for it.  That caring about something gives you permission to display your bad temper for all to see, because you’ll make up for it by doing something really cool.

As far as I’m concerned, nothing could be further from the truth.  There are plenty of egotists in the industry who think they’re entitled to a free pass on manners, and when I’m hiring, I steer clear of them, because there are just as many genius-level hackers who manage to behave themselves and work cooperatively with others without starting brawls.  The supply really isn’t so small that we have to take whatever we can get, and we don’t have to beg at anyone’s feet for knowledge, because that knowledge is freely shared, without a price tag, by others in the community.

As examples, I’d like to call out a couple by name.  Jack Daniel certainly merits the seniority and knowledge labels; it’s clear that he cares deeply about security, and he has one of the most realistic outlooks out there.  He is also one of the nicest guys I know, even under pressure that would turn anyone else into a puddle of rage.  He’s a curmudgeon par excellence who also manages to act like a grown-up.  (Plus, he has the epic beard, so he’s carrying a full set of credentials in the industry.)  Nobody would call Jayson Street a n00b or naïve, and yet he also tries to help wherever he can without being a jerk about it.  There are many more like them, and they are the ones who can point out ugly truths without being ugly about it themselves.  As a result, they garner respect from all areas of the community.

There is absolutely no need to sully enlightenment, integrity, openness and honesty by adding rage (and let’s call it what it really is:  a temper tantrum).  Every honorable goal that security professionals have – be it research, defense, development or education – can be achieved without stomping on fellow humans in the process.  Age does not confer the right to bully others under the guise of “educating” them; nor does any level of experience or knowledge.  No matter how much you’ve contributed to the state of security (or think you’ve contributed – watch that ego again), you still don’t get a pass on any bad behavior, and your lack of social skills is not a badge of honor.  Every industry has its members whose actions make the rest look bad, but at least we shouldn’t be glorifying them.  We have better options right in front of us.

Posted by shrdlu on Friday, May 27, 2011

How secure does that make you feel?

I had a great chat yesterday with Joel Scambray of Consciere; and like many of the very cool people I’ve met in the security community, he said some particularly thought-provoking things almost in passing that I had to noodle about here.

As a consultant, he said that he often has to act as a psychological counselor, and that’s very true for me as well.  Just like a real therapist, we often counsel the business on how to break bad habits and implement healthy behaviors, and either they buy into them or they don’t.  Sometimes the clients agree that they really should be doing something, but we have to help them work through the barriers to implementing it—and some of those barriers might be organizational, bureaucratic, structural, technological, financial, or emotional.  They relapse; they rationalize; they drop out for a while and then come back when the pain points get too numerous or they feel ready to tackle changes again.  By the way, this underscores why it’s so important to let the business decision-makers drive the risk management:  they are more likely to follow through on a course of action if they own the decision rather than having it imposed upon them.

Like people, organizations have personality quirks and flaws.  There are some pretty basic behaviors that we try to encourage in everyone:  know where your stuff is, keep track of it, take care of it, use good judgment—but there are many different paths to take, and most people bumble along through life with their deficits largely intact.  And you know what?  That’s okay.  We have a population with obesity, halitosis, narcissism, AD/HD, irrational phobias, and cognitive dissonance, but in aggregate we get along all right.  We often lose members to overweening pride, carelessness, or a fatal buildup of bad habits that sometimes take others down with them, but the world marches on.

The other analogy that Joel used was that of the investment advisor.  This is another good one because people really know they ought to be saving for the future, but do they always do it?  Do they have the knowledge?  Do they institute good saving habits?  How much are they willing to invest?  An investment advisor will sit down with you and look at your whole financial environment before making specialized recommendations, ones that you ultimately accept or reject based on your own risk tolerance.  And of course, below a certain level of income, an investment advisor just isn’t useful to someone who can barely afford to keep a car running and put food on the table.  There’s a whole security poverty line beneath which small organizations can’t afford to hire specialized security staff or a security consultant.  (As I’ve said before, it’s amazing how high your risk tolerance can get when you have no money.)  So a security consultant or MSSP is a relative luxury, reserved for the upper-middle class, and the small businesses will only drag themselves into the equivalent of a security H&R Block at tax (compliance) time.

Food for thought, as we figure out how to secure the world in spite of itself.

Posted by shrdlu on Friday, February 25, 2011

Next-generation metrics.

I’m guessing these will never make it to Metricon, so I’m putting them here.

Metrics that a CISO will be able to relate to instead of those ALE thingies:

- the number of times you have to beg your sysadmins to patch (per release cycle)

- the number of senior executives that violate the security policies they signed off on (per month or year)

- the number of conferences your boss refuses to send you to (per year)

- the number of security topics you discuss, divided by the number of drinks you have, at the one conference you’re allowed to attend

- the number of times you discover a homegrown “crypto” function during code reviews

- the number of times a security vendor tries to go over your head to make a sale (or at least schedule a demo)

- the number of (prohibited) iPads in your building, times the number of support requests for said iPads

- the number of times you have to explain cross-site scripting, per developer, per year (bonus if you have to explain it to a “security professional”)

- percentage of #LIGATT tweets in your tweetstream per day

- the number of times a network or application problem is blamed on “the firewall”

- number of incidents that you still aren’t sure really counted as actual incidents

- number of auditors per audit instance per year, times the number of staff members that have to interact with said auditors

- number of security-related PowerPoint slides generated per year, minus the number of recycled ones

- number of desks you’ve had to replace due to head damage, per job

Posted by shrdlu on Thursday, February 24, 2011

Connecting the risk dots.

So here’s something that’s been bugging me for a while.

Please note:  I am a cheerleader by day AND night for application security initiatives.  I believe we need to make software more secure and—dare I say it?—Rugged.  However, I am also keenly aware of some of the attitudes that the rest of the businesspeople have towards security.  They don’t agree with our risk assessments, but can’t always say why.  Here’s a prime example of the hidden logic FAIL that can lead to risk misalignment between the security team and its customers.

A vendor—or analyst firm, whatever—produces a paper touting the conventional wisdom that it’s a lot cheaper to fix software vulnerabilities early in the SDLC than just before or after deployment.  And I can get behind that idea, certainly.  But the reasoning produced to support it often ends up being circular.

The sequence goes like this:

1.  Claim that finding and fixing security vulnerabilities early increases ROI.

2.  Cost is calculated by the amount of money it takes to fix the vulnerabilities that were found.

3.  The initial investment is defined to be what you paid for the application security program (including testing, tools and whatnot).

4.  ROI is claimed to be the amount you save on fixing the vulnerabilities that you paid someone to find.

When it comes to counting the investment against the cost to fix actual breaches, the whitepapers mostly get vague.  They list all the vulnerabilities found, describe how bad they were—but don’t actually show that they led to specific breaches that incurred real costs.  They’re assuming that a vulnerability is bad and needs to be fixed, regardless of whether the vulnerability is EVER exploited.

Let me turn the mirror around and show you how we look from the other, non-security side.

We come in and demand a lot of money to set up an application security program.  We test, and come out with a list of things that we say are theoretically bad and that could lead to a theoretical breach sometime in the future.  We make the developers fix those vulnerabilities that we think are bad.  Then we track the cost of fixing those, and say that they’re saving money if they fix them sooner rather than later.

Well, they could have saved money to begin with by not building an application security program at all!  If you look at it from a certain angle, the security team generated its own cost to development, and then claimed that it was saving money by having developers fix those security-generated findings at a different time.  The security team created its own additional cost to the business, and there’s nothing to indicate that the business wouldn’t just have been better off financially by not doing security testing in the first place.
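
If you want to see the circularity in miniature, here’s a back-of-the-napkin sketch in Python.  Every number and variable name in it is invented for illustration; the only thing that matters is the shape of the two calculations.

```python
# All figures below are made up for illustration.

program_cost   = 200_000   # what the appsec program cost: tools, testing, people
fix_early_cost =  50_000   # cost to fix the findings during development
fix_late_cost  = 400_000   # projected cost to fix the same findings after release

# The whitepaper math: the "savings" exist only relative to a cost that the
# security program itself generated (fixing the findings it paid to discover).
whitepaper_roi = (fix_late_cost - fix_early_cost - program_cost) / program_cost
print(f"whitepaper ROI: {whitepaper_roi:.0%}")          # 75% -- looks great

# The math the business actually cares about: savings measured against the
# breach losses those specific vulnerabilities would plausibly have caused.
expected_breach_loss = 90_000   # probability-weighted loss tied to the findings
cost_avoidance = expected_breach_loss - (program_cost + fix_early_cost)
print(f"grounded cost avoidance: {cost_avoidance:,}")   # -160,000 -- oops
```

Same program, same findings; the only thing that changed is what the investment was counted against.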

We need to connect the dots better, people.  We need to trace a discovered vulnerability from its creation, through the SDLC, into deployment, and then connect it to an actual breach, leading to a real monetary loss suffered by the business.  THERE’S your ROI (or more specifically, cost avoidance).  If you can prove THAT, then maybe you can prove your risk case to the executives.

But with most of the arguments I see right now, it’s just as easy to use them to point out that the business can save a lot of money just by not inviting those pesky security people in to create more work for the developers.  That’s why many businesses reflexively resist making application security a priority.  From their perspective, it’s just going to inflate their development costs for no predictable return, even in the form of cost avoidance. 

If we want to make a financial argument for security, it’s going to have to be robust in its logic and based on evidence.

Posted by shrdlu on Thursday, January 13, 2011

You say potato, I say false positive.

I really don’t know how anyone can practically measure a false positive rate in application security testing.  Sure, there’s the case where a tool claims to find something that flat-out isn’t there, but I haven’t seen that happen much.  What is more likely is that the tool finds something unorthodox, and then a long discussion ensues on whether that means there’s an exploitable flaw there.  And even if there’s an exploitable flaw—theoretically—you still have to talk about how likely it is to be exploited (I’ll take “threat modeling” for 1=1, Alex) and what the potential impact could be.

Take “information leakage,” for example.  This is generally defined as anything from “your credit card numbers are showing” to “I can tell which Sybase version you’re using” to “boy howdy, that’s a butt-ugly error message you’re showing the user there.”  And I hate it when I end up having the following discussion with a pentester:

“This application error message indicates that you’re running Foonon 3.0009.”

“So?  Were you able to use that information to get into anything?”

“Well, no, but someone COULD.  Maybe not now, but in the future, in conjunction with other unknown vulnerabilities, and ...”

And suddenly we’re on the FUD slalom, without so much as hot cocoa waiting for us at the bottom of the slope.  People, if you can’t give me a plausible risk calculation*, then I don’t want that finding on my report.  I don’t want to have to explain it to generations of senior managers and auditors:

“It says here that there were ten of the same findings from last year.  Why didn’t you fix those?”

“They’re false positives.”

“You mean they weren’t really there?”

“No, they were there, but they weren’t exploitable and it doesn’t really matter.”

“But those are FINDINGS.  You have to make those go away or we won’t pass our Random Acronym certification.”

And I don’t want to have to go down the hall and say to a developer, “Yes, I know it doesn’t matter, but they say we have to fix it anyway.  Yes, I know nobody has touched that code in eight years.  I know this will ruin your month.  Sorry about that.  I’ll buy you a beer.”

Another case of the annoying “false positive” is in the “we MEANT to do that” category.  If someone complains to me about user enumeration again, I’m going to scream.  Yes, I know that someone can figure out whether they’re typing in an invalid username.  This is a WHOLE lot better for our users than silently swallowing the invalid username and letting them guess why they’re still not getting in, or letting them wait for a password reset that will never arrive.  Come on, folks.  I want all my “we meant to do that” so-called “findings” to be documented, and NEVER SHOW UP IN YOUR REPORT AGAIN.

Automated tools are great for some things, but they’re missing a lot of context—the environment, the sensitivity of the data, the mitigations at other layers—which is why I only want them to be used sparingly, and backed up by manual verification, so that the manual verification counts for more of the report.  The tool findings should be an appendix to the report:  one set of data among many inputs that were used in the pentesting.  They shouldn’t be the whole report, or featured up front and center.  And with the trend towards building a “pentest in a box”  (yes, I actually saw one of these at Shmoocon last year), I’m worried that automated findings will end up as the be-all and end-all of application security testing in the future.

We need more than a drive towards “lower false positive rates”—we need more reasonable positives.





* I said PLAUSIBLE, not “symbol times purple equals squirrelnoise.”

Posted by shrdlu on Saturday, January 01, 2011

Crush ‘em while they’re young.

Along with many others today, I am sad to be missing the inaugural HacKidCon, in which kids and their parents get to share the joy of hacking everything from locks to hair. 

Jack Daniel, as usual, is tweeting his commentary:

Too bad we couldn’t get someone to do a Careers in Compliance session, have to crush their little souls sometime.



Which highlights the fact that a whole “fun” facet of security is missing from the conference agenda.  And by “fun,” I mean “soul-crushing.”


Suggested HacKidCon CISO Track


Put one kid in charge of making all the other kids—of ALL ages and abilities—follow perfect table manners during a lunch with burgers, ribs, pizza, fried chicken and ice cream, according to the rules laid out in a 50-year-old copy of Emily Post.  This will teach him/her about enforcing policy in an organization.

Now put the kid in charge of making all the ADULTS do it too.  This is what it’s like to try to get senior management to follow policies.

By the time the next meal comes around, give the kid an updated Emily Post and tell him/her that this is the new standard that has to be complied with, and s/he has fifteen minutes to roll it out to a very hungry crowd.

Next, make the designated “CISO” recite “To a Young Ass” by Samuel Taylor Coleridge.  This is the equivalent of explaining PCI-DSS to a boardroom full of executives.  The child is required to use PowerPoint.

At some point in the session, the lucky HacKidCon attendee should be presented with a room full of enticing toys (some of which do not work, but that won’t be apparent until they’re used), and then told that there’s no budget for any of them.

The CISO-in-training should be in charge of investigating and cleaning up all spills in the conference facility, including where a 2-year-old took off a dirty diaper and used it to paint the walls.

And finally, the budding “CISO” should be placed in a slowly leaking rubber raft in a hotel swimming pool and given a Dixie cup with which to bail water, while helpful “security researchers” aim a stream of water at him/her from a hose from the safety of the pool’s sunbed area.



I offer to lead this track personally, if I ever make it to another con.  It’ll be cathartic for me personally, and entertaining for everyone else.

Posted by shrdlu on Saturday, October 09, 2010

BSOFH Interview Questions.

So while I was passed out last night, it seems Los Twitteros were busy helping me out.  I complained that I was having trouble coming up with interview questions for a new candidate, so a large number of them went to town.  BSOFHs don’t cry, but they do occasionally suffer from an overflow of vitreous humor.  Thanks, guys.

Since the Library of Congress will doubtless expunge every tweet referring to #QuestionsFromShrdlu, here they are, captured for posteriority.  [Comments from me in italics.]


@Shpantzer: When you include the hashtag #Thuglife on your twitter messages, do people nod solemnly or laugh hysterically?

@Shpantzer:  Which of the Rainbow books is your personal favorite? Recite a chapter of your choosing.

@Shpantzer:  How many packets would a snort box crunch if a snort box could crunch packets?

@Corum: The vendor is related to someone in the C-Suite. When the presentation mentions “compliancy” do you walk out anyway?

@J4vv4d:  Was Timothy Dalton the best Bond ever?

@wolfinpdx: Using your knowledge of the TCP/IP stack, Python, APIs, and an Arduino, make me a tweeting toaster.

@Shpantzer:  What is your numeric threshold for violence on the SCSoVLF? http://bit.ly/dBxgH7

@wolfinpdx:  Autobots or Decepticons? [Crowbars or Headcrabs?]

@mckeay:  What are the four closest bars to the Moscone Center and how do you social engineer your way into the parties being held there?

@wolfinpdx:  What is significant about Pat Cadigan’s “Synners”?

@mckeay:  When was the last time you spilled blood in honor of the patron saint of computers and whose?

@mckeay:  How do you get your boss’ phone to broadcast his location and what do you do with the information?

@mckeay:  How do you know I’m really @shrdlu and not just some peon who’s messing with a potential new boss?

@jaysonstreet:  Are there compromising pics of you from DEFCON online? How much are you willing to pay to keep it that way?

@mckeay:  How many times have you watched LoTR? Read? Read Bored of the Rings?

@cunningpike:  You are in a house with four windows. They all face south. What color is the bear?  [Xyzzy.  Oh wait, wrong game.]

@jjarmoc:  You find yourself in a dark corridor. A road heads north toward a light, shadows flank a cave to the east. ?  [There, that’s the one.]

@mckeay:  After all these questions, do you still want the job?

@biosshadow:  What is the proper way to eat a gummy bear? #questionsforshrdlu #yumgummybears

@VS_:  “If you knew what I know, how far away would you be right now?”

@gattaca:  Are these my pants?

@Shpantzer:  How do you apply the Leibniz Rule in your daily life?

@wolfinpdx:  Pirates or ninjas? [Beatles or Stones?]

@danielkennedy74:  A scale has six bowling balls, 4 on 1, 2 on the other. Using two balls, tell me how you answer questions like these?

@armorguy:  Canadians. Threat to the Free World or the Entire World?

@mckeay:  Your CFO read an article on a plane and wants you to buy a new technology. How do you convince him your new IDS is it?

@danielkennedy74:  You’ve discovered via mail filtering your CFO’s 7 evil ex-mistresses. What strategic security investment do you pitch?

@armorguy:  @myrcurial invites you to sit “for a little chat”. What NAFTA provisions are about to be violated?

@rybolov: Pliers and a blowtorch or bamboo shoots under their fingernails? 

@mckeay:  How would you defeat the Kobayashi Maru scenario? 

@armorguy:  @securityintern: Hot or Totally Hot?

@mckeay:  What’s the difference between this place and a madhouse? (You get better drugs at the madhouse) 

@mckeay:  Two men, @Beaker and @jeremiahg offer to teach you BJJ. How fast should you be running and why? 

@rybolov:  How many 80-hour workweeks fit into a 24-hour day? 

@cyberhiker:  What is your current salary? Are you willing to take a 50% pay cut and crappier benefits? 

@mckeay:  Your daughter attends Lower Merrion School District. Who’s your lawyer and who’s your contact at the FBI? 

@cyberhiker:  What year/make/model car do you drive? Correct answers are early 90’s toyota corrolla or honda civic. 

@rybolov:  Why the hell would any sane person want this job and what does this say about you as a candidate?

@xorrbit:  vi or emacs? Choose your next words carefully $(firstname)onidas, for they may be your last… [If you’re not using “cat > $FILENAME,” you’re not really committed.]

@armorguy:  Which Dr. Who is your favorite? If not Tom Baker or David Tennant, you may go.  [Okay, Sylvester McCoy was kinda cute too.]

@mckeay:  A drunk developer let himself into the building and is shooting your servers. Call the police or join him? 

@VS_:  Arrange these numbers in correct order: 16, 18, 11, 30. Now add these names: Macallan, Talisker, Lagavulin.

@mckeay:  You’ve just found out that developers are using your live database in testing. How many bodies do you have to dispose of? How?

@rybolov:  Why didn’t you show up 30 minutes late to your own interview? 

@cyberhiker:  Name your favorite muppet, explain your answer.

@armorguy:  Who currently holds the mortgage on your soul? Are you current on interest payments?

@mckeay:  What was your favorite issue of Make Magazine and how many projects have you completed? Almost completed?

@VS_:  When did you realise you’re not Napoleon?  Certified or certifiable?

@rybolov:  Do you have more than an ounce of dignity left? How do we grind it out of you?

@mckeay:  Which is your favorite X-man? Explain.

@mckeay:  Someone has just asked if you have a hook, a half-diamond or a bogata handy. What are they and which do you have with you?

@armorguy:  In how many languages are you fluent in assorted profanities, obscenities, or vulgarities?

@mckeay:  How do you condition your liver for Black Hat and Defcon? RSA? Why the difference?

@armorguy:  At a conference @geekgrrl asks to see your phone. Do you let her? Explain your answer.  [Oh HELLZ no!]

@rybolov:  How many uses can you think of for a cattle prod? [Legal or illegal?]

@mckeay:  Where’s your towel?

@armorguy:  @rybolov approaches you with a flask. Do you drink it if offered? Why or why not?  [Of course—it’s the only way to inoculate yourself against ShmooFlu.]

@danielkennedy74:  Have you ever been in a Turkish prison? Have you ever seen a grown man naked?

@rybolov:  Can we dunk you in a pool full of pirhanas as a proof-of-concept?

@csoandy:  Explain your similarities to MacGyver.

@mckeay:  Your CFO has infected his machine for the third time this month. What do you do to his pr0n collection?

@mckeay:  You haven’t seen your family in 3 weeks due to your work schedule. Is this a) desirable b) unavoidable or c) what family?

@cyberhiker:  What is your favorite @exoticliability stripper story?

@armorguy:  After how many password resets is it legitimate to eviscerate a user?

@danielkennedy74:  1 train leaves Chicago at 11:30am traveling 112mph. 1 leaves NY at 12, at 69mph. Where do you see yourself in 5 years?

@mckeay:  The vendor has offered you bribes of money, chocolate or coffee. Which do you choose? (All 3 is an acceptable answer)

@rybolov:  Is your last name “Roberts’); drop table users;—”?

@mckeay:  How many action figures do you own? How many of them are in mint condition? How many can you part with for this job? [3 Babylon 5, plus 2 Sandman plush figures; all of them; NONE OF THEM.]

@armorguy:  Boxers, briefs, thongs, or commando? Please be prepared to show your work.

@cyberhiker:  When is the last time you sacrificed a team member for the community coffee machine?

@mckeay:  Where do you keep your pr0n? Is it separate from your anime? Why or why not?  [There’s a difference?  #notentaclesplease]

@kriggins:  When eliciting information from reluctant persons, do you prefer piercing or crushing implements? Why?  [Crushing ones are more carpet-friendly.]

@Shpantzer:  What is the proper use of the machete in the datacenter?

@danielkennedy74:  Sometimes in infosec, the best laid plans go awry. Where would you hide the bodies?

@mckeay:  Explain what the word “quine” means and why you should avoid “quine-like rages.”  [That’s a trick question.  Rage is the new Greed; it’s Good.]

@armorguy:  .40 S&W or .45 ACP?

@mckeay:  Have you ever had to scrape whiteout off of a secretary’s screen? Off the boss’s screen?

@cyberhiker:  Name your favorite open source security project. Demonstrate its use with the live CD in your bag.

@kriggins:  Upon learning a dev has been given write access to production, please describe your response to the sysadmin.

@mckeay:  In 140 characters or less, tell me your life story.

@armorguy:  Exactly which cube on the Help Desk will you reserve for @beaker? Why that one?

@armorguy: Describe, in 25 words or less, the 7 forms of ritual suicide you’ll accept from team members.

@mckeay:  What’s the best way to dispose of the body of the sales guy who won’t log off for your scheduled maintainance window?

@armorguy:  Given an IDS sensor, a Win95 laptop, and 2 patch cables - create in-depth network defense. You have 15 minutes.  [Just give me a pair of wire cutters and we’re done.]

@rybolov:  What is your current AD password?

@kriggins:  When describing the risk associated with a particular effort or action, what color scale do you use?

@mckeay:  What is the mean time between failure of a floppy drive? How many computers do you own that still have them?

@armorguy:  Auditors: Necessary Evil or just Pure Effing Evil?

@agent0x0: What is the airspeed velocity of an unladen swallow?

@mckeay:  How many ways do you have with you to open a lock, right this moment? Have you read Practical Lockpicking?

@armorguy:  At what point are you legally justified in rectally inserting the IDS appliance into the salesperson?

@rybolov:  Which 3-letter vendor do I hate?

@rybolov:  How many seconds would it take you to lock the AD administrator in the datacenter and trip the halon system?

@mckeay:  How do you make a Hoffaccino? And how do you survive drinking one?  [Hoffaccino is the new Pan Galactic Gargle Blaster.]

@rybolov:  Users: weakest link ever or weakest link ever? #HereHaveASoftball

@mckeay:  Whose secretary do you make friends with first? The CEO’s, the CFO’s or the receptionist? [Tip number one:  do NOT call them “secretaries.”]

@armorguy:  What’s your favorite compliance framework? (Note:This Is A Trick Question)

@rybolov:  When did you stop beating your existing staff?

@rybolov:  Let’s do some roleplay, shall we? I’ll be the CFO and you can be the CISO groveling for more budget.  [Do I get a safeword?]

@armorguy:  How many ways to you know how to kill a man? How many to resuscitate? Why the discrepancy?

@cyberhiker:  When is the last time you saw “War Games,” “Sneakers,” “The Matrix” and “What about Bob?” (Must name all 4)

@rybolov:  How many executives have you blackmailed this week/month/year over their web browsing history?

@armorguy:  Are you currently a fugitive from ISACA, ISC^2, ISSA, or the PCI Council? If not, why not?

@kriggins:  Describe to me your calming process when faced with abject stupidity or willful ignorance.  [It involves a machete.]

@rybolov:  Have you ever used a machinegun to stem a wave of human attackers?

@cyberhiker:  When is the last time you made a small child cry? If this morning, was the child yours or one you just met?

@armorguy:  Do you realize that the fact you want this job tends to disqualify you? Can you explain yourself?

@cyberhiker:  Looking at your resume, you are clearly qualified. What the hell did your parents do to you?  [They let me read Heinlein at an early age.]

@cyberhiker:  Do you promise to not throw me under the bus as soon as you take over?

@rybolov:  Can you juggle flaming chainsaws?

@cyberhiker:  Do you suck? And if not, will you continue to not suck?
 
@armorguy:  How do you crush the soul of a department manager?  What’s the access code to the Pit of Ultimate Darkness?

@mckeay:  “Are you willing to work ridiculously long hours with little recognition and even less pay?” is a good start [That’s how my last job started ...]

Posted by shrdlu on Tuesday, August 31, 2010

MacIntel.

It’s getting really hard to find something to add to the Intel-McAfee pile-on, but Bruce Schneier posted a comment that is worth repeating:

What we’re going to see is consolidation of non-security companies buying security companies. So, remember, if security is going to no longer be an end-user component, companies that do things that are actually useful are going to need to provide security.



Which makes complete sense.  Security really shouldn’t be a separate discipline. When done right, security is a shadow organization of all of IT.

Please note:  I am not using “shadow” in the sense of “opposition,” although that frequently happens (NVPs).  I’m also not necessarily using it in the sense of “the real power behind the throne,” although that happens too.  I’m using it in a more analogous sense: that every aspect of IT has an aspect of security to it, and we should be so closely aligned with IT itself (which itself should be so closely aligned with the business) that if we do our jobs right, you should only catch a glimpse of a shadow going by. 


(Then again, in security we also end up knowing what evil lurks in the hearts of disk drives ...)

Posted by shrdlu on Friday, August 20, 2010

Mrs. Grundy’s tweetstream.

Bob Blakley just tweeted a quote from a talk that set me off:

“If you integrate social networks & business, is HR going to ask you not to swear on the weekend?”

My very lawyerlike answer:  it depends.

Social networks are here to stay, water is wet, and too much sugar is bad for you.  We still don’t have new rulesets for privacy to go along with social media, so there are a lot of people blundering about doing the wrong thing.  As every socmedaware geek will be happy to tell you very loudly, over drinks at a party, it simply has not sunk in with most people that what you post on the Internet is easily readable—no matter where it is or what controls you thought you put on it—and will be there forever.  It has, however, started sinking in that ANYONE with a grudge against you can spread that grudge very publicly, and there’s little you can do to stop it.

The combination of those two realities means that the Mrs. Grundy of yore, peering at you from behind her curtained window and gossiping about you at church, ain’t got nothin’ on the Internet.

Once everyone has had his/her fifteen minutes of fame in real life and fifteen weeks of notoriety on the Internet, I expect we will settle down and stop pointing fingers at every slip.  We will learn how to ignore published lies from bloggers and turn a blind eye to Facebook indiscretions.  Until then, though, we have to hope that businesses will figure it out faster and will treat employees appropriately.

Does your employer have the right to monitor and govern your use of social networks?  It depends an awful lot on context and the type of job you have.  People who have positions requiring background checks are already used to this; I don’t expect they would bat an eye at the thought of the same investigator reading everything about them online.  Those who are public figures by profession also understand this. 

The shady area starts in the public sector, where individuals who are public servants by day (say, answering the phone or administering databases for a local governmental entity) expect that they will be allowed to clock out of this role when they go home at night.  There are thousands of types of public sector jobs that do not involve being regarded as a public servant 24/7.  So those people would like to be left alone in their houses, their places of worship, and their Friendster accounts.  They did not sign up for cameras watching their every nose-pick at their desk, or the equivalent, in the name of transparency, accountability and whatever else. 

As you go up the ladder, though, whether it’s in the public or the private sector, the rulesets clearly change, and everyone wants to know what the CEO of BP was doing on his off hours, especially if it could be turned against him by anyone with an agenda.  “CEO OF BP SHOPS AT EXXON GAS STATION; BUYS SUNGLASSES AND KIT-KAT BAR!”

So it’s clear that gossip-worthiness falls along a spectrum, and we kinda know where that is in real life.  Due to the power of search facilities, though, it’s not clear yet online.  Our power to find out every mention of a person on the Internet is disproportional to how much we would hear about him in real life.  It’s as though everyone came with his own newspaper now, and his own dedicated spot on the bulletin board at the Y.  Somewhere deep down, we believe that because we can know so much about someone, he must be more of a public figure, and therefore must submit to the same privacy rules that public figures have.

In the public age of the Internet, we don’t know who public figures really are anymore.

I know too many people who take the attitude that because someone has an online presence of any kind, he has agreed to be a public figure with all that entails, and deserves every inspection or thrown tomato that he gets.  They want to punish the grandmas and grandpas of the world for being “foolish” enough to put something out there in one community without understanding that there are no walls around communities on the Internet.  This is not in the least bit helpful, and frankly, it smacks of technical elitism.

Having said that, we get into trouble when a non-public figure becomes too easily searchable or too ubiquitous online—say, because he has a unique name or is renowned in a large discussion forum.  When a person is too visible online, it becomes more difficult to separate him from his day job, no matter what it is.  A person who sticks out too much on the Internet becomes a reputational risk to his employer, and there’s just no getting around that.  This would be the same if he were, say, running for public office or writing a column for a national magazine in his spare time.  In this case, if the visibility is too high and it is too negative, the employer may have every right to say, “You’re damaging our reputation in a way that we have no way of stopping except by disassociating ourselves from you.  Here’s a box for your personal belongings; it’s been nice knowing you.”  Or the HR conversation may involve the words “conduct unbecoming to a board member/public servant/officer” and a disciplinary action.

We need to make sure that employers know the difference between visibility and searchability.  When there is no expected visibility (and therefore reputational risk) attached to a job, the employer should not be searching online in non-professional areas for mentions of the person who is filling that job.  When there is no real-life requirement for a background check, the employer should not be doing the equivalent of a background check online.  That is disingenuous of the employer and demeaning to the employee; it’s as if the employer were insisting that the employee comply with business dress code on the weekends.

In other words, social network monitoring should be commensurate with the real responsibilities of the job, not commensurate with what is technically possible.  I hope that employers will fall into line with this soon if they haven’t already.  They should be able to defend logically the level of their surveillance (and let’s call it that, because that’s what it is).  If they can justify interviewing Mrs. Grundy on her doorstep, then they can justify searching her blog. 

But they’d better watch their own backs when they get online at night.

Posted by shrdlu on Friday, July 30, 2010

For or against.

WARNING:  SWEEPING GENERALIZATIONS AHEAD.  Watch your feet.

I was pondering earlier why there appears to be such a large cultural gap among some areas of security, why some pockets of the security world are dismissed as irrelevant by others.

I think it has to do with attitude.

Some security professionals I know—who more likely than not come from the defense and law enforcement sides of the house—approach security questions from the perspective of defending against bad guys.  They spend all their time and energy on war planning:  “We’ve got to stop the users from hurting themselves and us!”  “Loose lips sink ships!”  “We’ve got to make them understand how DANGEROUS it is out there!”  “Let’s calculate the risk to the last 15 decimals and maybe they’ll believe us.”  “We need more policies and rules around this type of action.”

Others are trying to make things work in a safe manner.  How can we enable cloud customers to audit their environments?  How can we create an open, trustworthy method of ID management?  How can we help users become more secure in an environment that is way too complicated?

I think I know which type of professional everyone else outside of security wants to work with and listen to.  If you are not working in defense or law enforcement, you don’t make them part of your daily business.  You don’t call the FBI to sit in on business meetings; you only call them if and when you get sufficiently pwn3d that you need help with prosecution; otherwise you’re going to call a commercial incident response company.  When you want to roll out something new, you don’t ask the FDA to come help you design it and market it.  You deal with them only as much as you need to—as much as regulations force you to.  Anyone whose only tune is OMGWTFCYBERWAR! is not going to be invited up to the karaoke stage.

So which type of security professional are you?  Are you a fighter or a builder?

Posted by shrdlu on Sunday, July 11, 2010

Crazy talk.

Several people have been pointing out that security is fundamentally broken, and we need some radical adjustments to fix it.  I’ve also been re-reading Thomas P.M. Barnett’s The Pentagon’s New Map, in which he argues that globalization has disrupted the old rulesets that we formerly used as a society, and we need a bunch of new rulesets.  Substitute “globalization” with “disruptive technologies,” and I think we’re onto something in the infosec space.

Take as an example the fact that everyone is talking about needing a “mobile device security policy.”  These policies tend to fall into three categories:

1) No.
2) Only with the mobile devices we give you.
3) Uh ... is that the new iPad?  Can I see it?

Number one is idealistic and completely impossible to enforce, unless you bodily frisk everyone walking in the door of your company.  And even that doesn’t work when your employees just go out into the parking lot for a smoke^Wsmartphone break, as Trevor Hawthorn pointed out in his ShmooCon talk, in which he found out that a couple of the game buddies he was trailing via a smartphone game worked at Highly Secured Locations (around 30:00 into the video).  Oops.

Number two is also unrealistic, for the same reasons as number one.  People will happily take your company’s downscale Blackberry, AND bring their iPhone into work for the really cool stuff.  And forget about BES; I think it’s going to be obsolete as soon as people figure out that as long as you have any kind of browser-enabled remote access to email, you can get it downloaded to your smartphone.  I see my executives do it all the time.

Anything that doesn’t use a browser as its interface is probably going to be irrelevant pretty soon.  The browser is where the cyberwar’s at.  The browser, and ports 80 and 443.

So I’m calling the game, folks.  We.  Have.  Lost.*

Like it or not, we have been moving steadily from the world in which everyone in a building used one teletype to connect to the computer, to a bunch of hardwired terminals, to desktops that “belonged” to the building, to the browser, which doesn’t belong to anyone and can’t be physically controlled.

Remember the briefcase?  (Does anyone younger than 40 even know what one is?)  When I was growing up, every man with a desk job had a briefcase in which to take home work.  (Yeah, I said “man.”  I’m that old.)  My dad had one.  Of course, he took really good care of it, because it also had his wallet in it.  It was big enough that it was pretty hard to forget about.  Now, the military and law enforcement got cool handcuffs that came with their briefcases, but not anyone else.

We’re still trying to adapt what used to be physical controls to software. 

The Jericho Forum waded into this disrupted chaos with the recommendation that we head towards de-perimeteri{s,z}ation, which is a very good step in the right direction, but I suspect we need to go even farther than that.  I think we need to pull back (run away, run away!) from battling with the user over what are now essentially office supplies.  Let’s face it:  mobile computing has become the (very fancy) equivalent of a phone, a notepad, and a pencil—and we can no longer dictate AT ALL how people use them.  (“No, you can only use OUR pens, and you can’t use them to write naughty words.  That’s a violation of company policy.  If you do it, we will send you a memo and tell you very sternly not to do it again.”)

Once I came up with the idea of putting every employee’s SSN on any USB drive they plugged into our desktops, to motivate them to take good care of it.  My boss wasn’t down with that, surprisingly enough.  But I think we still need to find the motivation.

So here it is:  maybe we SHOULD throw our lot in with our employees.  They’re putting their personal data on our desktops; they’re putting it on any PDA we hand to them; and likewise, they’re getting peanut butter in our chocolate by putting corporate data on their privately owned smartphone/tablet/pad/doily.  Maybe we should embrace this, and work TOGETHER with the employee to secure BOTH kinds of data, since they’re going to be sharing all the same browser space and hardware and OS and wifi.

Now, the military and law enforcement can probably continue to get away with bodily searches, secure paper and tactical pencils, but the rest of us can’t.  We are no longer in charge of how and where our communication and work tools are used.  We need to stop demonstrating, over and over again, Einstein’s definition of insanity, and do something completely different.

Do I know how to do this?  Of course not.  I’m hoping that smarter heads than mine will take the ball and run with it.  But I’ll be cheering you on, and maybe one day it won’t even sound all that crazy.




* UPDATE:  George V. Hulme’s comment made me realize that I didn’t explain things very well.  We have not lost the fight against “the bad guys.”  We have lost it against our users.  And let’s face it:  today’s security models make it really hard to tell the difference between the two.


UPDATE 2:  To follow Barnett’s model, if we really need new rulesets around security and privacy in this area of disruptive technology, maybe the policies that we (security) are creating are holding society back from developing those rulesets.  We’re keeping them from the real objective by turning them against us, the security folks, instead of addressing the real gaps.  (Thanks to @greg_pendergast for that one.)

Posted by shrdlu on Saturday, July 10, 2010

The exception IS the rule.

I see a lot of frustration in the security community about breaches that happen because an organization didn’t have controls or configurations in place that we consider to be “the right ones.”  (I won’t use the words “best practice,” or even the word “standards,” because God may move on from killing kittens to killing other adorable pets.)  The consensus seems to be that if everyone just had “the right controls” in place uniformly, security would improve, and the voice of the turtle would once again be heard in our land.

Folks, it’ll never happen.  And by “never,” I mean SO never that you should probably never have asked for it to begin with.

I will posit to you that managing security in an enterprise is not about managing controls; it’s about managing exceptions.

For every tool setting, there is an equal and opposite exception.  (@nselby will know what prompted this.)  If you look at a firewall, it’s pretty much one big exception right there:  it’s a device you use when you have to connect two networks together even though you know you really shouldn’t.  Every firewall rule that doesn’t have a “deny” in it is by definition an exception.

When an auditor comes a-knockin’, nearly everything she will ask you about is an exception.  Why is this account still active?  Why don’t you have setting X enabled?  And for some of them, the answer will be, “Uh, we forgot,” but the most frustrating times are when you have to explain that you MEANT to do that; that there’s a solid business reason (hopefully with risk mitigation behind it) that just won’t allow things to be “standard.”  I’ve known auditors who get that, and auditors for whom the word “exception” causes their heads to explode.

Here are some (maybe unspoken) rules around exceptions:

1)  Exceptions need to have at their core a business reason for existing.  (The proximate cause might be technical, but not the root cause.)
2)  Exception decisions need to be made by the business, or by the business’s designee (often the CISO).
3)  Exceptions should have a defined lifecycle and TTL.  (They might be there until you get off the mainframe, but they’re still understood to have a limit, not be a permanent dismissal of the control itself.)
4)  Exceptions need to be documented to the extent that they can be reviewed and/or explained at any time.

And this is where just about every security technology falls flat.  They all assume that you will have every configuration of a tool at the “optimal” setting, the one they designed it for.  Nearly all of them make it hard, if not impossible, to manage the exceptions in a consistent, consolidated way.

Every time you tune an IPS, you’re putting in exceptions.  Every time you find a scanner hit that you know isn’t going to get fixed, you have an exception.  (Ever tried to document exceptions in a 500-page PDF scanner report?)  Baseline traffic is very hard to determine until you’re completely aware of the exceptions (otherwise known as, “This IS normal, you idiot.”).  Every file share, every non-expiring password, every patch you can’t apply—they’re all exceptions.

Some CISOs try to list some of the major exceptions in a spreadsheet, but it’s nowhere near the scene of the crime(s), and you can’t possibly keep up with all of them.  Tools need to come with easy, immediate exception management, so that for every setting, you can explain why it’s there.

Whenever you look at a firewall rule, half the time you’re going to be asking yourself, “Why is that there?  Did *I* put it there?  Do we still need it?”  It would sure be nice if the explanation were right there, as a comment that could be version-tracked, exported into nice reports, searched on, and placed in a standard format that would be compatible with other exception entries in other tools.  (Kind of like a syslog for exceptions.)  It would be nice if you could mark a scanner finding as, “We KNOW it’s there.  We’re not going to fix it.  Just for these two machines, STOP REPORTING ON THIS.”

Imagine a world where the CISO could print out a report of all exceptions granted to the Bahama office (those rogues).  Not just all exceptions in one tool, but in EVERYTHING that has a security setting.  Imagine being able to go through a report and immediately identify non-standard settings that you DIDN’T intend to put there, as opposed to having them get lost in the noise of all the ones you DID intend.  And no, I’m not talking about Unified Threat Management, unless you consider an exception to be a self-inflicted threat.
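
To make that concrete, here’s a rough sketch of what one of those standardized exception records might look like, with the four rules from above baked in as fields.  The schema, field names, and sample entries are entirely my own invention; no tool I know of emits anything like this today, which is rather the point.

```python
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical "syslog for exceptions" record; every field name is invented.
@dataclass
class ExceptionRecord:
    tool: str             # firewall, IPS, scanner, file share, AD, ...
    setting: str          # the specific rule, finding, or configuration affected
    scope: str            # the host, subnet, application, or office it applies to
    business_reason: str  # rule 1: a business reason at the core
    approved_by: str      # rule 2: the business or its designee
    expires: date         # rule 3: a defined lifecycle and TTL
    reference: str        # rule 4: documented so it can be reviewed at any time

exceptions = [
    ExceptionRecord("firewall", "permit tcp/1433 from partner DMZ", "Bahama office",
                    "legacy reporting feed; goes away with the mainframe",
                    "CFO (delegated to CISO)", date(2011, 12, 31), "TKT-4811"),
    ExceptionRecord("scanner", "finding #1337 accepted, stop reporting", "hq-web-03",
                    "vendor appliance, no patch available, WAF rule in front of it",
                    "CISO", date(2011, 6, 30), "TKT-3990"),
]

# The report I actually want: every exception granted to one office, across
# everything in the enterprise that has a security setting.
for record in exceptions:
    if record.scope == "Bahama office":
        print(asdict(record))
```

Export that in one consistent format from every tool, and half of the “Why is that there?  Did *I* put it there?” conversations disappear.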

I’ve been asking for this kind of enterprise tool management capability since, oh, 1998 or so.  Won’t somebody please step up and make it happen?

Posted by shrdlu on Friday, July 09, 2010

It’s not about identities, it’s about attributes.

It’s too bad that the stock term “identity and access management” leaves out the bridge between “identity” and “access”—and that is the relevant attributes attached to that identity.

Finding out whether someone is who he claims to be is pretty straightforward for the purposes of data collection.  We do it every day with passport numbers and driver’s license numbers and so on.  But that in itself is not enough to grant and manage someone’s access to a system.  There are also methods for disambiguating identities within a system, whether it be by unique username, user ID, email address, or a combination of demographics (by the way, don’t try that last one at home for any really big system).  We can do those parts of an IAM system.

But managing access is all about deciding WHAT that person needs and, most of all, WHY—and the WHY requires attributes.

Different systems have different business rules for their access.  You might be granted access to a system because you’re a parent of a particular child; or because you’re an employee of a particular company; or because you’re a customer of a particular outfit.  You might be a beneficiary of a service, a contributor of content, a Person of Size, or a candidate for office.  One system won’t care whether you’re a parent, but it will care very much whether you’re still an employee.

So a system either explicitly or implicitly assigns attributes to the identities it uses.  A “title” might imply that the user is an employee.  A relationship, such as “is related to $child,” would indicate that the user is a parent.  Some of the attributes, like contact information, might be important or might not be, depending on the business rules; and that determines whether that information is validated and actively managed.  You want to keep validating that someone is still an employee of your organization, but you might not care so much if he moves from Apartment 202 to Apartment 203 at home.

These attributes might lead to a user being assigned roles, but you shouldn’t confuse the two.  The condition of an attribute determines which role(s) you’re assigned and for how long.  For example, if you are the parent of a student at a school, you might be assigned a Parent role in the school’s grade-monitoring system, but if you lose custody of that student—or once the student leaves that school—you will have that role removed, along with the access.  (Well, you *should* have it removed.  Thus Spake the Auditor.)
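
Here’s a toy sketch of that parent example in code.  The attribute names and the rule are all made up for illustration; no real IAM product models it exactly this way.

```python
from datetime import date
from typing import Optional

def parent_role(attributes: dict, as_of: date) -> Optional[str]:
    """Grant the Parent role only while the guardianship attribute is valid
    for a currently enrolled student; when the attribute lapses, so does
    the role, and the access that came with it."""
    child = attributes.get("guardian_of")
    enrolled_until = attributes.get("child_enrolled_until")
    if child and enrolled_until and enrolled_until >= as_of:
        return "Parent"
    return None

alice = {"guardian_of": "student-1027", "child_enrolled_until": date(2012, 6, 15)}
bob   = {"guardian_of": "student-2041", "child_enrolled_until": date(2009, 6, 15)}

today = date(2010, 9, 1)
print(parent_role(alice, today))  # "Parent" -- attribute still valid
print(parent_role(bob, today))    # None -- student left, so role and access go too
```

The role is just a consequence; the attribute and its lifecycle are what you actually have to manage.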

This is why identity management is so hard to pull off.  Even when two business areas agree that they need to manage the same attribute for access to their systems, they may disagree on lifecycle.  One area might only be concerned with active customers, and the other might want to keep a user in its system if she has EVER been a customer, regardless of current activity.  One area might have rules for validating an attribute that are unrelated to, or conflict with, the rules of the other business area.  An identity might have multiple values for the same attribute (working for McDonald’s AND Burger King).  The architect of an IAM system has to juggle all these aspects, and more.

Your only hope is to make all these attribute rules and assumptions transparent, so that you can have a running start at keeping them all in line.  There are few things hairier than discovering that your business area was using a non-validated field in a database for crucial business decisions.  This is also what keeps shared databases from being redesigned, by the way:  when a field means different things to different people, and the meanings aren’t codified or documented anywhere. 

I suppose it’s too late to add another “A” to IAM.  Oh well.  If you take up drinking, you might be able to see the doubled “A” or you might not, but at least it will ease your IAM headaches.

Posted by shrdlu on Thursday, June 10, 2010