
Help Me, Obi-Wan – You’re My Only Hope: Three Cyber Security Innovations to Give You Courage

Executive Summary: 

Despite all of the negative press about how weak the collective good-guy cyber defenses are, there is reason to hope. Today I discuss three cyber security innovations that not only work but will fundamentally change how we all do our jobs in the future. Some in our community are leaning forward with these ideas and showing us the way. They are teaching us how to transform our tactical Incident Response teams into strategic intelligence organizations. They are replacing our old-school, tactical signature defenses with the more modern Kill-Chain and Indicators-of-Compromise methodology. And they are breaking new ground on how to share threat indicator information between peers. 


Introduction

My company tagged me to speak at an upcoming cyber security customer event this week. When I asked the event organizers if they had anything specific they wanted me to cover, they said to discuss all of the leading-edge things going on in cyber right now: the exciting stuff. “Make it informative but interesting,” they said. “Show why we are a thought leader in the space. And oh, by the way, you have 10 minutes to do it.”


No Problem.

Innovative Thinking

This got me thinking about the state of the cyber security community. There really are leading-edge things happening in the community right now: game-changing thinking and processes that will fundamentally transform how we all do our jobs in the future. In my old iDefense days, we used to call these things Cyber Security Disruptors [16]. The well-informed, or shall I say the forward leaners in our group, have been doing these kinds of things for years. The rest of the community, though, is just starting to get its arms around the issues. To me, there are three cyber security disruptors influencing the community today: 

1: The transformation of the traditional Incident Response team into a higher-level Cyber Intelligence team.
2: The transition away from customary, perimeter-defense, signature-based technologies toward a more robust, methodology-based set of Indicators-of-Compromise technologies and processes.
3: The 180-degree reversal of the old thinking that sharing threat information is bad. In fact, sharing is a good thing, and we need to find better ways to do it.

Intelligence

For the past decade or more, Cyber Security Operations Centers (CSOCs) have been in the Whack-a-Mole business: see a threat – block a threat. When we started doing this back in the late 1990s, it was a perfectly reasonable thing to do. Today, the practice seems almost quaint. The bad-guys have gotten better, and the things that worked for the good-guys back in the day need to get better too. It is as if we built moats and thick walls around our castles a decade ago to stop infantry and cavalry and now expect those same defenses to work after the enemy upgraded to laser-guided missiles and drone attacks. Not likely.

What has emerged is this idea that a cyber-attack is not really a discrete event. The bad-guys are people too. They have motivations like crime, hacktivism, espionage and war. The successful ones develop repeatable processes that they can apply to different targets. If the good-guys block a particular attack, the bad-guys do not say to themselves, “Oh wow, I am blocked. I should close up shop and go home.” As an old Army boss of mine used to say, “The enemy gets a vote.” They see what the good-guys did and engineer a work-around.

Raising our awareness a notch or two, from the bad-guys’ discrete attacks to their long-range attack campaigns, is the direction the security community is heading. The government has been doing this since the Titan Rain days [17]. Forward-leaning commercial organizations, mostly from the financial sector and the Defense Industrial Base (DIB), have been doing it for the past five years or so. The rest of us are playing catch-up. The way forward is to move away from reacting to a particular incident and to develop the techniques that will enable your organization to understand how the adversary operates in the aggregate [3] [4]. The way forward is to figure out how to transform your old Incident Response team into an intelligence organization. The first step is to understand the Intelligence Cycle.

The first time an author used the phrase “Intelligence Cycle” was probably in a book called “Intelligence is for Commanders” by LTC Phillip Davidson and LTC Robert Glass in 1948 [1]. These two US Army intelligence officers wrote an after-action report of sorts regarding how the Army used intelligence in World War II. In the book, they describe the intelligence process as a cycle. Although the Intelligence Cycle model is simplistic and has its critics [2], every US government organization that produces intelligence uses some form of it to do its work.

For cyber security, the government and forward-leaning commercial organizations have adopted this approach too. The point here is that the rest of the commercial cyber security community needs to follow suit and probably will over the next five years.
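For readers who have not seen it, the cycle is usually described as a loop of five stages. Here is a minimal sketch, in Python, of that loop; the stage names follow the form commonly used in US doctrine (see FM 2-0 [6]) rather than a quotation from Davidson and Glass, and the comments are my own gloss, not theirs.

    from itertools import cycle

    # The Intelligence Cycle as commonly described in US doctrine [6]:
    # a loop, not a straight line; dissemination generates new requirements.
    INTELLIGENCE_CYCLE = [
        "planning and direction",       # decide which questions the organization must answer
        "collection",                   # gather raw data: logs, malware samples, shared indicators
        "processing and exploitation",  # normalize and enrich the raw data
        "analysis and production",      # turn the data into assessments about adversary campaigns
        "dissemination",                # deliver the assessments to the people who act on them
    ]

    # A tactical incident response team tends to stop after collection;
    # an intelligence team keeps walking the loop.
    for step, stage in zip(range(7), cycle(INTELLIGENCE_CYCLE)):
        print(step, stage)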



Signature-Based and Methodology-Based Defenses

For the past 15 years, commercial organizations have been deploying the Defense-in-Depth model: firewall and intrusion-detection devices on the perimeter and anti-virus software on the hosts. If you have followed the news at all this past decade, you know that these defensive measures are nothing more than a speed bump to the highly motivated adversary. Signature-based defenses are predicated on the idea that network defenders can build a gigantic electronic wall designed to keep the adversary out. They focus on a single tactical behavior that the adversary uses, with no context about what he is trying to accomplish.

That idea started to change with the publication of the seminal paper on Intrusion Kill Chains in October 2011 [18]. The authors, Hutchins, Cloppert and Amin, changed the paradigm. Instead of trying to keep everything out, they decided to assume that the adversary would get in somehow. If they allowed that assumption, what could they learn about how the adversary operates? The result of that forward thinking is that the adversary has to accomplish seven distinct tasks [18]:

1. Reconnaissance: Research, identify and select targets.
2. Weaponization: Couple his intruder code to a deliverable like a PDF, Word document or email message. For specific targets, he might craft his deliverable to catch the interest of the target (Spear Phishing).
3. Delivery: Transmit the weapon to the intended target.
4. Exploitation: Run his intruder code on the victim’s host and, ultimately, take ownership of the target machine.
5. Installation: Download and install more software to the target machine that allows him to maintain persistence inside the target’s network.
6. Command and Control (C2): Establish a command channel back through the Internet to a specific server.
7. Actions on Objectives: Exfiltrate documents and move laterally inside the network compromising more machines.

The good news here is that the adversary has to succeed at every link in the kill chain for his campaign to work; the good-guys only have to succeed once to thwart the attack. Stop the intruder at one link and the chain breaks. The bad-guys have to start over. This allows the network defender to develop strategies around every link in the Kill Chain. In order to do that, he needs good cyber intelligence; specifically, he needs to understand exactly what the bad-guys do at each link in the chain.
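To make that concrete, here is a minimal sketch, in Python, of how a defender might reason about the model. The phase names come from the paper [18]; the mitigations and the observed phases are hypothetical examples of mine, not anything the authors prescribe.

    from enum import IntEnum

    class KillChainPhase(IntEnum):
        """The seven phases from Hutchins, Cloppert and Amin [18]."""
        RECONNAISSANCE = 1
        WEAPONIZATION = 2
        DELIVERY = 3
        EXPLOITATION = 4
        INSTALLATION = 5
        COMMAND_AND_CONTROL = 6
        ACTIONS_ON_OBJECTIVES = 7

    # Hypothetical defensive controls keyed by the phase each one is meant to break.
    # The model's point: the defender only needs one of these to work.
    mitigations = {
        KillChainPhase.DELIVERY: "block the sender's domain at the mail gateway",
        KillChainPhase.EXPLOITATION: "patch the targeted PDF reader",
        KillChainPhase.COMMAND_AND_CONTROL: "sinkhole the known C2 domains",
    }

    def earliest_break(observed_phases):
        """Return the first observed phase where the defender has a mitigation ready."""
        for phase in sorted(observed_phases):
            if phase in mitigations:
                return phase, mitigations[phase]
        return None

    # Example: a campaign observed at delivery, exploitation and command and control.
    print(earliest_break({KillChainPhase.DELIVERY,
                          KillChainPhase.EXPLOITATION,
                          KillChainPhase.COMMAND_AND_CONTROL}))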

Enter Mandiant.

Kris Kendall, working for Mandiant at the time, introduced the idea of Indicators of Compromise (IOC) at the annual Blackhat conference in 2007 [13]. Since then the idea has really caught on [15]. Indicators of Compromise are sets of forensic artifacts that describe an attacker’s methodology [9][10]. Good ones illuminate a hacker’s entire plan.

The difference between a signature and a set of IOCs is that with the signature, the network defender defines one way that any attacker might defeat the electronic wall. With a good set of IOCs, the network defender understands the blueprint behind a specific attacker’s entire campaign plan. If the network defender gets a hit on a deployed signature, he is not sure what is going on or even if the alert is true, because signatures are prone to false positives and carry no context. Conversely, if a network defender gets hits on a number of IOCs in the same set, not only is his confidence high that an attack is in progress, but there is a good chance he knows which hacker or hacker group is attacking and what its next move is.
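A rough sketch of the distinction, again in Python and with entirely hypothetical indicator values: a signature is a single pattern with no campaign context, while an IOC set groups indicators by kill chain stage so that multiple hits raise confidence that one specific campaign is under way.

    # A single signature: one pattern, no context about the campaign behind it.
    signature = {"name": "suspicious PDF exploit string", "pattern": b"%PDF-1.4\x41\x41\x41"}

    # A set of IOCs describing one hypothetical group's methodology across the kill chain.
    apt_campaign_iocs = {
        "delivery": {"sender_domain": "hr-careers-portal.example"},
        "installation": {"dropped_file_md5": "d41d8cd98f00b204e9800998ecf8427e",
                         "persistence_key": r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\updater"},
        "command_and_control": {"beacon_domain": "update.cdn-static.example",
                                "beacon_interval_sec": 3600},
    }

    def campaign_confidence(observed):
        """Fraction of kill chain stages in the IOC set with at least one observed hit."""
        hits = sum(1 for indicators in apt_campaign_iocs.values()
                   if any(observed.get(key) == value for key, value in indicators.items()))
        return hits / len(apt_campaign_iocs)

    # Two of the three stages observed: confidence is already high, and the defender
    # knows which stage to expect next.
    observed = {"sender_domain": "hr-careers-portal.example",
                "beacon_domain": "update.cdn-static.example"}
    print(campaign_confidence(observed))  # 0.67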

For just about five years now, mature cyber security intelligence organizations, like some groups in the government, the financial sector and the Defense Industrial Base, have been developing IOCs across all layers of the Kill Chain for specific cyber-espionage hacker groups. Depending on who you talk to, some 20+ groups have been active in the last 10 years. Mandiant famously, or infamously depending on your viewpoint, released the entire set of IOCs for one of these groups in time for a big marketing push for its cyber security product at the annual RSA Conference this year [19].

What we have discovered over this five-year period is that these adversaries are not supermen. They do not have unlimited resources. And they have repeatable processes (methodology) that we can capture (IOCs) and watch for across the Kill Chain. The trick now is to figure out how to pass these IOCs along to less mature organizations.


Cybersecurity Information Sharing

As far back as the late 1990s, no commercial organization would ever willingly share threat information with a competitor. The thinking at the time was that doing so would add to your competitor’s bottom line and take away from yours. The security community dreamed of a utopian environment where all information was free and we would willingly help each other out. But the business leaders we worked for laughed out loud when we expressed these flights of fancy.

And then two business sectors decided to throw caution to the wind and fix it: the Financial Sector and the Defense Industrial Base. They came at it from completely different angles and they do it differently, but these two groups of commercial companies have spent years developing the trust and processes necessary to share Threat Indicator information (Indicators of Compromise) among their member organizations. They have broken ground for the rest of us to follow.

With the Financial Sector, it all started with Presidential Decision Directive (PDD) 63, signed by President Clinton in May 1998. PDD 63 called for immediate action within critical infrastructure companies in four areas: Information Sharing, Outreach, Research and Vulnerability Analysis [20]. This resulted in the formation of various Information Sharing and Analysis Centers (ISACs) across multiple sectors (IT, Energy, Communications, etc.). Today, the Financial Services ISAC (FS-ISAC) is arguably the most robust in terms of member organizations and the most successful in terms of sharing Threat Indicator information between members.

Not to be outdone, the Defense Industrial Base (DIB) took a shot. The Deputy Secretary of Defense directed the establishment of the program in 2007 in order to facilitate threat information sharing between commercial companies that support US Government contracts. Department of Defense (DOD) stakeholders are DISA (the Defense Information Systems Agency), DC3 (the Defense Cyber Crime Center) and USCYBERCOM (US Cyber Command). DISA is responsible for the development and maintenance of the DCISE (the DIB Collaborative Information Sharing Environment). DC3 manages the DCISE for DISA. USCYBERCOM is responsible for sharing threat information with DIB partners [5][24][25].

Both groups work. Neither is without some pain, but generally the member organizations of both groups have a more mature understanding of the threat picture than their counterparts that do not participate. The rest of us have looked at these two success stories and have started to walk in their general direction. The trick the community has to solve now is how to seamlessly and effortlessly share IOCs with each other and with everybody else.

On February 12th of this year, President Obama released an Executive Order, “Improving Critical Infrastructure Cybersecurity,” where, among other things, he said that by June 12th of this year:

“… the United States Government [will] increase the volume, timeliness, and quality of cyber threat information shared with U.S. private sector entities so that these entities may better protect and defend themselves against cyber threats … [T]he Attorney General, the Secretary of Homeland Security … and the Director of National Intelligence shall … ensure the timely production of unclassified reports of cyber threats to the U.S. homeland that identify a specific targeted entity” [21]. 

Only time will tell how successful this Executive Order will be, but the cyber security community is gaining momentum in the information sharing domain. We all have one last problem to solve, though: now that we want to share information, how do we do it?

This discussion is just starting, and there are myriad formats to choose from. Mandiant is pushing its own format called OpenIOC [10] [11]. The IETF is pushing its standard, the Incident Object Description Exchange Format (IODEF) [12]. And MITRE has two sister technologies called STIX and TAXII (Structured Threat Information eXpression and Trusted Automated eXchange of Indicator Information) [22] [23]. The community has not settled on one standard yet, but the discussions have begun.
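Whatever format wins, the underlying idea is the same: wrap the raw indicator in enough structure, including things like source, confidence, kill chain stage and handling rules, that a machine on the receiving end can act on it automatically. Here is a format-neutral sketch in Python; every field name and value is a hypothetical example of mine, not a fragment of OpenIOC, IODEF or STIX.

    import json

    # A format-neutral sketch of a shareable threat indicator record.
    # Real exchanges would serialize something like this in OpenIOC, IODEF or STIX.
    indicator_record = {
        "id": "indicator-0001",
        "type": "domain",
        "value": "update.cdn-static.example",
        "kill_chain_phase": "command-and-control",
        "confidence": "high",
        "first_seen": "2013-05-01T00:00:00Z",
        "handling": {"tlp": "amber", "source": "example-sharing-community"},
    }

    print(json.dumps(indicator_record, indent=2))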

The point is that Threat Indicator Information Sharing is in our future. Some forward-leaning organizations have already begun. It will not be long before the rest of us follow suit.


Conclusion:

There is a lot of doom and gloom in the press lately about how weak our collective cyber defenses are. But all is not lost. True thought leaders have broken new ground for the rest of us to follow. They are transforming our tactical Incident Response teams into strategic intelligence organizations. They are replacing our traditional moat and gigantic-electronic-wall signature defenses with the more modern Kill-Chain and Indicators-of-Compromise methodology. And they are showing us how to share threat indicator information between peers. The future looks good to me.

Sources:

[1] “RFI: Who Invented The Intelligence Cycle?” by Kristan Wheaton, Sources and Methods, January 4 2011, Last Visited March 3 2013.
http://sourcesandmethods.blogspot.com/2011/01/rfi-who-invented-intelligence-cycle.html

[2] “Critiques Of The Cycle: The Intelligence Cycle Vs. Reality,” by Kristan Wheaton, Sources and Methods, May 27 2011, Last Visited March 3 2013.
http://sourcesandmethods.blogspot.com/2011/05/part-6-critiques-of-cycle-intelligence.html

[3] “Getting Ahead of Advanced Threats: Achieving Intelligence-Driven Information Security.” By the Security for Business Innovation Council, 2012, Last Visited March 3 2013.
http://www.rsa.com/innovation/docs/CISO-RPT-0112.pdf

[4] “Establishing a Formal Cyber Intelligence Capability,” by iDefense, Verisign, May 5 2012, Last Visited March 3 2013.
https://www.verisigninc.com/en_US/forms/idefensecyberintel.xhtml

[5] "DC3: Defense Cyber Crime Center:  A National Cyber Center," Air Force Office of Special Investigations, 2 May 2013, Last Visited 6 June 2013
http://www.dc3.mil/

[6] “US Army Field Manual FM 2-0: Intelligence,” by the United States Army’s Publishing Directorate, Headquarters Department of the Army, March 23 2010, Last Visited March 3 2013.
https://www.fas.org/irp/doddir/army/fm2-0.pdf

[7] “Psychology of Intelligence Analysis,” by Richard Heuer, published by Novinka Books, January 1 2006.

[8] “Analyzing Intelligence: Origins, Obstacles and Innovations,” by Roger George and James Bruce, published by Georgetown University Press, April 9 2008.

[9] “Understanding Indicators of Compromise (IOC) Part I,” by Will Gragido, RSA, 3 October 2012, Last Visited 6 June 2013.
http://blogs.rsa.com/understanding-indicators-of-compromise-ioc-part-i/

[10] “Sophisticated Indicators for the Modern Threat Landscape: An Introduction to OpenIOC" by OpenIOC, Last Visited 6 June 2013.
http://openioc.org/resources/An_Introduction_to_OpenIOC.pdf

[11] “OpenIOC: An Open Framework for Sharing Threat Intelligence,” OpenIOC, Last Visited 6 June 2013.
http://www.openioc.org/

[12] “The Incident Object Description Exchange Format," by Danyliw, Meijer and Demchenko, Network Working Group, IETF, Last Visited 6 June 2013.
http://www.ietf.org/rfc/rfc5070.txt

[13] “Practical Malware Analysis," by Kris Kendall, Mandiant, Blackhat 2007, Last Visited 6 June 2013.
http://www.blackhat.com/presentations/bh-dc-07/Kendall_McMillan/Paper/bh-dc-07-Kendall_McMillan-WP.pdf

[14] “Foreign Attacks on Corporate America," by Kevin Mandia, Mandiant, Blackhat 2006, Last Visited 6 June 2013
http://www.blackhat.com/presentations/bh-federal-06/BH-Fed-06-Mandia.pdf

[15] “Indicators of Compromise Entering the Mainstream Enterprise?" by Lenny Zeltser, Lenny Zeltser on Information Security, 7 March 2013, Last Visited 6 June 2013
http://blog.zeltser.com/post/44795789779/indicators-of-compromise-entering-the-mainstream

[16] “2013 Cyber Security Disruptors,” by iDefense, Jan 2013, Last Visited 6 June 2013
http://www.verisigninc.com/assets/idefense-disruptors-wp.pdf

[17] “Inside the Chinese Hack Attack" by Nathan Thornburgh, Time Magazine, 25 August 2005, Last Visited 6 June 2013
http://www.time.com/time/nation/article/0,8599,1098371,00.html

[18] “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains," by Hutchins, Cloppert, Amin, Lockheed Martin, October 2011, Last Visited 6 June 2013
http://papers.rohanamin.com/wp-content/uploads/papers.rohanamin.com/2011/08/iciw2011.pdf

[19] “APT1: Exposing One of China’s Cyber Espionage Units,” Mandiant, 2013.

[20] “Executive Order on Cybersecurity ... PDD 63 Deja Vu,” by Warren Axelrod, BlogInfoSec.com, Information Security Magazine, 9 April 2013, Last Visited 6 June 2013.
http://www.bloginfosec.com/2013/04/09/executive-order-on-cybersecurity-pdd-63-deja-vu/

[21] “Executive Order – Improving Critical Infrastructure Cybersecurity,” by President Obama, the White House, 12 February 2013, Last Visited 6 June 2013
http://www.whitehouse.gov/the-press-office/2013/02/12/executive-order-improving-critical-infrastructure-cybersecurity

[22] “STIX: Structured Threat Information eXpression,” by Mitre, Last Visited 6 June 2013 
http://stix.mitre.org/

[23] “TAXII: Trusted Automated eXchange of Indicator Information,” by Mitre, Last Visited 6 June 2013 
http://taxii.mitre.org/

[24] “Defense Industrial Base (DIB) Cyber Security/Information Assurance (CS/IA) Activities," Assistant Secretary of Defense for Networks and Information Integration / DOD Chief Information Officer, 19 January 2010, Last Visited 6 June 2013 
http://www.dtic.mil/whs/directives/corres/pdf/520513p.pdf

[25] “Defense Industrial Base: Critical Infrastructure and Key Resources Sector-Specific Plan as Input to the National Infrastructure Protection Plan,” Homeland Security, Department of Defense, May 2007, Last Visited 6 June 2013 
http://www.dhs.gov/xlibrary/assets/nipp-ssp-defense-industrial-base.pdf


