Here are the slides from my 614con talk about the 1996 investigation.
— Steve
We found a secret message:
b jmrz wjzoz wjfyujwo, of fvwzh b fyujw wf czxqmtz wjmw oqfw nbwj njmw b fhtz gfyujw 'tmyoz oflzgfip owfqz lp tmc cmibf mhi hfn b ayow obw bh obqzhtz. obwwbhu bh obqzhtz ubrzo lz wjz wblz wf mii wf wjzoz qpcbto mhi qzmi pfy wf uf wf uf ifw foy ifw ziy oqmoj twu wjczz njbtj lmezo vfc m czmqqp gmi cjplz. ofccp! ayow gz uqmi nz ibih’w obhu bw…
It seems to be encrypted with a “substitution cipher”, meaning that someone swapped each letter for some other letter. For example, “secret” might become “bxtvxw” if someone switched b for s, x for e, t for c, v for r and w for t. This would be hard to guess outright, but for longer messages you can usually make some good guesses by looking at which letters occur most frequently – these usually match up with the most frequent letters in normal English. You should also look at things like short words (a, I, of, so, to, at, the, was, saw…) and patterns (doubled letters in the “cipher text” will be doubled in the “plain text” – “zwwp” might be “look”, “book”, “tool” and so on).
These aren’t too hard to break, especially if you have some patience and good tools to help you.
You can use http://www.cryptoclub.org/tools/cracksub_topframe.php to help get the job done – good luck! Paste the message from above into that site, click the “crack” button, and have fun. The frequency table might help with some of your guesses for the most common letters. Enter your guesses in the key above: if you think that “A” in the cipher text is actually “B” in the plain text, then put “A” beneath “B” in the key – your translation will appear below.
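If you want to do the frequency counting yourself, here's a minimal Python sketch (standard library only; the cipher string below is just the message from above, truncated – paste the whole thing in). The key guesses at the bottom are examples, not the answer:

```python
# Quick letter-frequency count of the cipher text (paste the full message
# in place of the truncated string below).
from collections import Counter

cipher = "b jmrz wjzoz wjfyujwo, of fvwzh b fyujw wf czxqmtz wjmw oqfw ..."

counts = Counter(c for c in cipher if c.isalpha())
for letter, n in counts.most_common():
    print(letter, n)

# Once you have some guesses, you can apply a partial key with str.translate.
# Example guesses only: b->i, w->t, j->h, z->e
key = str.maketrans("bwjz", "ithe")
print(cipher.translate(key))
```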
The Gateway Film Center is having a special showing of Zer0 Days on July 17th followed by a panel discussion that I’m participating in. This is a movie about Stuxnet, the malware that was unleashed to try to set the Iranian uranium enrichment program back. See their Facebook page for details…
— Steve
The problem’s introduction reads:
Monty Hall wrote a script of how he was supposed to run one of his game shows for his trusty accounting computer some time ago, but hes not really sure what the punch cards mean any more. I mean, that was a while ago. Only, hes sure his key is hidden somewhere in these punch-cards, if he could figure out how to run them… : 150
and they provide a tarball of PNG files that appear to be punch cards.
You could translate these manually, but a bunch of people have implemented their own punch card readers, and I found a nice Python script that can read the images. However, it seems to want them in inverted colors (white holes, not dark). I inverted them with GIMP (Colors → Invert) and then ran the script. The script seemed to work well, but in my case it missed the first column. Note that you can use the “-i” and “-d” options to the script to help your debugging efforts. There’s probably some way to adjust the script to catch the first column, but I didn’t bother to look into that.
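If you'd rather not click through GIMP for every card, the inversion is easy to batch. This is just a sketch using Pillow; the “card*.png” pattern is a guess at the file names in the tarball, so adjust it to match:

```python
# Batch-invert the punch card images so the reader script sees white holes.
import glob
from PIL import Image, ImageOps

for path in glob.glob("card*.png"):
    img = Image.open(path).convert("L")            # force grayscale
    ImageOps.invert(img).save("inverted_" + path)  # write an inverted copy
```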
Running the script across all the cards in numerical order yields this text:
DENTIFICATION DIVISION. PROGRAM-ID. LETS-MAKE-A-DEAL. AUT HOR MONTE HALPARIN. ATA DIVISION. WORKING-STORAGE SECTION. 01 DOORCHOICES. 02 GOODDOOR PIC 9. 02 FIRSTCHOICE PIC 9. 02 OPENDOOR PIC 9. 02 C NGEDOOR PIC 9. 01 CURRENTDATE. 02 CURRENTYEAR PIC 9(4). 0 CURRENTMONTH PIC 99. 02 CURRENTDAY PIC 99. 01 DAYOFYEAR. 02 CURRENTMONTH FILLER PIC 9(4). 02 YEARDAY PIC 9(3). 01 URRENTTIME. 02 CURRENTHOUR PIC 99. 02 CUR RENTMINUTE PIC 99. 02 CURRENTTENS PIC 9. 02 CURRENTONES PIC 9. 02 FILLER PIC 99. PROCEDURE DIVISION. DISPLAY 'MH: WELCOME TO L ETS MAKE A DEAL'. D PLAY 'MH: THERE ARE THREE DOORS. ONLY ONE WITH THE KEY' . ACCEPT CURRENTTIME OM TIME. IF CURRENTONES < 4 SET GOODDOOR TO 1 ELSE IF CURRENTONES < 7 SET GOODDOOR TO 2 ELSE SET GOODDOOR TO 3 END-IF END-IF DISPL 'MH: YOU MAY ONLY OPEN ONE DOOR. WHICH DOOR?'. IF CURR ENTTENS = 0 OR CURR TTENS = 3 SET FIRSTCHOICE TO 1. IF CURRENTTENS = 1 O R CURRENTTENS = 4 T FIRSTCHOICE TO 2. IF CURRENTTENS = 2 OR CURRENTTENS = 5 SET FIRSTCHOICE O 3. DISPLAY 'PLAYER: I PICK DOOR ' FIRSTCHOICE '.' IF FIRSTCHOICE = GOODD R DISPLAY 'MH: THAT IS AN INTERESTING CHOICE OF DOOR .' IF CURRENTTENS R 0 OR CURRENTTENS = 4 SET OPENDOOR TO 3 END-I F IF CURRENTTENS = OR CURRENTTENS = 5 SET OPENDOOR TO 1 END-IF IF CURRENTTENS = 2 OR CURRENTTENS = 3 SET OPENDOOR TO 2 END-IF DISPLAY 'MH: LET GIVE YOU A HINT.' DISPLAY 'MONTY HALL OPENS DOOR ' OPENDOOR DISPLAY ' GOAT RUSHES OUT WITH NO KEY.' DISPLAY 'MH: WOULD YOU LIKE TO CHANGE YOUR GOOR CHOICE?' DISPLAY 'PLAYER: YES! MY LOGIC MINOR I N COLLEGE HAS A USE!' GOOR IF CURRENTTENS = 2 OR CURRENTTENS = 4 SET CHANGEDOOR TO 1 D-IF IF CURRENTTENS = 0 OR CURRENTTENS = 5 SET CHANGEDOOR TO 2 E -IF IF CURRENTTENS = 1 OR CURRENTTENS = 3 SET CHANGEDOOR TO 3 EN IF DISPLAY 'PLAYER: I WILL CHOOSE DOOR ' CHANGEDOOR ' INSTEAD!' ELSE ET CHANGEDOOR TO FIRSTCHOICE. IF CHANGEDOOR = GOODDOOR DISPLAY 'MH: CONGR ETULATIONS! YOU FOUND A KEY.' DISPLAY 'MH: THE KEY I S:' DISPLAY 'KEY ETALEXTREBEKISASOCIALENGINEER)' ELSE DISPLAY 'MONTY HA LL OPENS THE DOOR. GOAT JUMPS OUT.' DISPLAY 'MH: THIS IS THE INCORRECT DOOR.' DISPLAY 'TH GOAT EATS YOUR PUNCH CARDS. START OVER.'. STOP RUN.
That’s (broken) COBOL, yuck, but it looks like we got most of the content. All that remains is to find the key and fill in any missing details. If you read through the code, you’ll see line 29/30 talks about the key. It looks like “KEY ETALEXTREBEKISASOCIALENGINEER)” but it’s missing the first character of one of the lines. No worries, you can fill it in manually using a conversion chart.
My by-hand conversion of those two cards has the key as “(SETALEXTREBEKISASOCIALENGINEER)” but that doesn’t seem to work as an answer.
Chris tried ALEXTREBEKISASOCIALENGINEER, which is indeed the key.
This started a trend of me almost finishing problems and other people figuring out what the correct key for the problem was.
That’s why you work in a team, right?
Problem intro:
During my time at KGB I learned how to hide all the stuff from alpha-dog. But damn it, I somehow lost some of the most important files… : 100
They provide what appears to be an ext3 file system image. Just for grins and because it’s easy, I ran “foremost” (a file carving tool) to see what it would find. It only found one file, a zip file. And that is encrypted and we don’t know the password. So that got me thinking that I would need to examine the file system for clues about the password.
I also ran “strings” on the disk just to see what would pop up, and it turns out there were a ton, and they mostly seemed to be notes about various spy agency goings-on.
Nothing jumped out as a password, so I started mining the strings from the image to try to guess the zip file password. Of course, the password could be something that you’d have to read and figure out, or it could be something that isn’t an ASCII string, or it could be encoded in some way… so I gave up on that in favor of doing some more forensics work.
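For what it's worth, the brute-force idea is simple to sketch in Python: feed every string pulled from the image to the carved zip and see if anything opens it. The file names here (“disk.img”, “carved.zip”) are placeholders, and this only works for classic ZipCrypto archives – and, as noted, it didn't pan out here anyway:

```python
# Try every string found in the disk image as the password for the carved zip.
import subprocess
import zipfile

out = subprocess.run(["strings", "disk.img"], capture_output=True, text=True)
candidates = set(out.stdout.splitlines())

zf = zipfile.ZipFile("carved.zip")
member = zf.namelist()[0]
for pw in candidates:
    try:
        zf.read(member, pwd=pw.encode())   # raises RuntimeError on a bad password
        print("password found:", pw)
        break
    except RuntimeError:
        continue
```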
I decided to use a forensics suite called “dff” to do the work. I really like dff. In the course of examining the file system, I found hundreds of files named “secret123”, “secret124” and so on. Many contained short text strings (found above), some were empty, some were deleted. I resigned myself to reading them, when I noticed a file named “.secretXXX” (I forget the actual name). Two things were interesting about this. The first is that its name starts with a period: in the Unix world that’s a signal that the file should be “invisible” by default. This is often used to hide things (though it’s trivial to find them). The other item of interest is that dff identified the file type as “KGB Archiver”. I thought “wtf?! how could dff know about a file type that appears to have been made up for this scenario?” It wasn’t made up, of course: it turns out KGB Archiver is an actual file compression program.
Downloaded it, installed it in a Windows VM, ran it on the file – the key was right there.
The encrypted zip file was a red herring…
Description:
omg tha NSA hacked my super secret login, I caught them exfillin this pcap, am I t3h fuxxed? : 200
And they provide a pcap file.
Opening the pcap, you quickly discover that it’s a recording of USB traffic. Who knew? 🙂 A little googling revealed some info about that, and a nice set of scripts for working with USB pcaps.
The pcap just contains traffic for a mouse. The protocol is pretty simple, and I’ll leave it to you to research it. The main thing to know is that in this case the mouse is being polled, and each report it sends to the host is 4 bytes: one byte of button bits (button one on/off, button two on/off, etc.) and one byte each for the x, y and wheel deltas (2’s complement). Sometimes button 1 is pressed, but mostly not.
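Here's a small sketch of decoding one of those 4-byte reports in Python; the sample bytes at the bottom are made up for illustration:

```python
# Decode a 4-byte USB mouse report: button bits, then signed x/y/wheel deltas.
import struct

def decode_report(data: bytes):
    buttons, dx, dy, wheel = struct.unpack("<Bbbb", data[:4])
    return {
        "button1": bool(buttons & 0x01),
        "button2": bool(buttons & 0x02),
        "dx": dx,       # signed (2's complement) deltas
        "dy": dy,
        "wheel": wheel,
    }

# Made-up sample: button 1 down, x moved +5, y moved -5
print(decode_report(bytes([0x01, 0x05, 0xFB, 0x00])))
```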
I thought that maybe I could use the scripts I found to replay or view the traffic – that might be possible, but probably isn’t a good way to solve the problem. I tried that for far too long before I gave up and pursued simpler ways to visualize the traffic.
I first wanted to see what the mouse motion was all about – maybe they were drawing a picture? I exported the data from the pcap to a text file (pcap-data.txt) and wrote a script to convert that into a simple PostScript file to display the mouse motion. What does that look like to you? It’s a sideways keyboard – you can see that there are these “foci” at regular spaces, 10 in one row, then 9, then 7, and a wider area at the right which would be the space bar. This is a recording of someone typing on a virtual keyboard.
To get the message, I rewrote my script to keep track of the current x, y coordinates and to output a data record with the coordinates and an incrementing sequence number whenever we see a button press in the data (only button one ever gets pressed). Then I plotted the results with gnuplot, which is incredibly useful, btw.
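The core of that script looks roughly like this. It's only a sketch: the exact layout of pcap-data.txt depends on how you exported it (I'm assuming one hex-encoded 4-byte report per line), and clicks.dat is a made-up output name:

```python
# Accumulate mouse deltas and emit one (x, y, sequence) record per click.
x = y = 0
seq = 0
was_down = False

with open("pcap-data.txt") as f, open("clicks.dat", "w") as out:
    for line in f:
        data = bytes.fromhex(line.strip().replace(":", ""))
        if len(data) < 4:
            continue
        buttons, dx, dy = data[0], data[1], data[2]
        # x and y deltas are 2's-complement signed bytes
        dx = dx - 256 if dx > 127 else dx
        dy = dy - 256 if dy > 127 else dy
        x += dx
        y += dy
        down = bool(buttons & 0x01)
        if down and not was_down:      # record the press, not the whole hold
            seq += 1
            out.write(f"{x} {y} {seq}\n")
        was_down = down
```

Then something like `plot 'clicks.dat' using 1:2:3 with labels` in gnuplot puts each sequence number at its click position.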
Sorry, everything is upside-down, but that’s OK.
Some parts of the keyboard are too busy with overlapping numbers to be able to read them. So I split the data file into pieces and only viewed 15-20 key presses at a time. But then you have the problem that it’s hard to make out exactly where the keys are. So I scaled everything to the same scale, viewed the diagram above, and marked the key locations with a whiteboard marker on my screen. Then I could view the data sets with 15-20 key presses and transcribe what letters were being typed. The message I got was “THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG THEKEYISIHEARDYOULIKEDSKETCHYETCHINGGLASTYEAR”. There was a problem in last year’s BKP named “Sketchy”, and “sketchy” is slang for “dodgy or uncertain”, so this all makes sense. The double-G doesn’t make sense, but after examining the data it appears to be a case where the mouse button was held down long enough to span two polling periods, so it shows up twice. But I couldn’t get the site to accept the key.
I hate using the shift key. I frequently type text in lowercase and then have to go back and painstakingly uppercase characters as needed. I don’t know why I chose to enter the text as uppercase, but I did. The key was lowercase – John was the one who figured that out.
I thought it would be fun to post an abbreviated post-mortem we did on a compromised computer from a few years ago. I’ve tried to strike a balance on detail so that a variety of audiences could understand this, but I’m sure the more technical among you would like more details (ask me, I’ll share) and this abbreviated version might even be too detailed for others. My main reason for posting this is to talk about perspectives on compromised computers and anti-malware software, but we could also talk about forensic techniques, log analysis, timeline reconstruction and so on. I think I’ll save the detailed forensic, log analysis, and timeline discussion for a later post.
Let’s start with the results of the analysis, which you can find at Incident Analysis (shortest). This is an abbreviated version of a timeline analysis of a compromised computer from a few years back – I’ve anonymized the data but kept the timestamps.
In an investigation like this we typically construct a timeline composed of chronological data pulled from a variety of sources: file system metadata, log files from the affected system and other systems, network traffic logs and so on – anything relevant that could help reconstruct the chronological history surrounding the event in question (this is what forensic specialists are calling a “super timeline” these days; we didn’t use to have a special name for it :-). This original data might consist of millions of events (or more, sigh). In this case I focused mostly on evidence from the local system and our IDS logs over a relatively short slice of time (a few hours).
When you make maple syrup you boil 40 gallons of sap to make a gallon of syrup. In a similar fashion, when you do forensic timeline reconstruction you “boil” hundreds of thousands of events down to a few thousand, and might even take it further – this is the 3rd “boil” from my original set of events, and we’re down to what’s essentially hand-written commentary about what I observed in the logs from the 2nd boil.
The “big picture” you should take away from the analysis linked above is that a computer got infected through vulnerable software, and this led to a cascade of malware being installed on the system. Some of this malware was caught and stopped by the anti-malware software, but most was not – most importantly, the main downloader was not. This ultimately led to the computer sending out large quantities of spam. One question I wanted to address is how we should view things when our anti-malware software blocks something bad. I think it’s fairly common for people to think “good, my anti-malware software just saved me from that malware” and in some cases, that might be correct. But I’ll make the following observations…
Anti-malware software doesn’t catch everything. Anti-malware software on this computer obviously detected some things in this example but it completely missed others. The miscreants know this, and actively seek to avoid detection. Fresh malware is typically not detected by many anti-malware products, and it often takes days or weeks for the anti-malware vendors to start detecting it.
Just because your anti-malware product detected and blocked/cleaned/deleted/quarantined something doesn’t mean that you’re safe. I think it’s best to regard anti-malware as a detection mechanism rather than as a preventative measure. Yes, with luck, sometimes (most of the time? some of the time? rarely?) it will prevent malware from taking root on your machine. But if it detects and blocks something, how do you know whether it missed anything? Did it catch the first-stage downloader or some later stage? If it missed the first stage, what else did it miss?
I’m not arguing that anti-malware is useless. Obviously some things were detected by anti-malware, and well before we noticed anything at the network level through network-based intrusion detection. I’m a belt-and-suspenders sort of security guy; I think it’s prudent to use multiple layers of detection/prevention. Defense in depth, etc.
Reinstallation is better than disinfecting. Good luck cleaning up from something like this. How would you know whether you got everything? Granted, in some cases it’s possible, but (a) how do you know you’ve gotten everything when you’re likely using the compromised system to investigate itself and do the fixing (go read Thompson’s “Reflections on Trusting Trust” 🙂 and (b) for the sake of “plain old” disaster recovery (disk failure, theft, building caught fire, etc.) you probably should have some sort of capability for quickly restoring systems in the event of a disaster – and if you do, you can probably restore a system quite quickly and painlessly. If you’re disinfecting rather than reinstalling because reinstalling is too hard, I’d suggest that you need to work on your disaster recovery procedures. Apart from ensuring that you’ve removed all of the malware, you also have to worry about the configuration changes the malware might have made to the system – did you find and fix all of those?
The missing piece: finding the root cause(s) and fixing the exploitable vulnerability. In this case the system had a vulnerable version of Adobe Reader installed, and this was used to perform the initial infection via malicious PDF files. But I don’t think it’s common for people to do much of a root cause analysis on compromised computers – you frequently hear them talk about disinfecting or reinstalling them, but if no RCA is done and the initial point of entry is unknown, they’re just a target waiting to be infected again. Was it a missing patch? An old version of Java left behind when you updated it? Was the computer user browsing the web with an account that has administrative privileges? Did they fall for a tech support phone scam? Find the exploitable vulnerability and fix it (and fix it on the other affected systems as well)!
I’ve got a number of papers that I typically share with students in my class. I’ve selected these because I think they’re interesting, not necessarily because they’re the most current on the various topics. I gather these from a variety of sources including Usenix (I’m a huge Usenix fan, though I haven’t been able to attend any of the conferences lately), DefCon and BlackHat. There are also a number of authors I stalk, er, track. One of them is Vern Paxson – you’ll see that several of the papers below have his name on them.
“Measuring Pay-per-Install: The Commoditization of Malware Distribution“, by Juan Caballero, IMDEA Software Institute; Chris Grier, Christian Kreibich, and Vern Paxson, University of California, Berkeley. This talks about the ways that miscreants can pay for installation of malware.
“The Nuts and Bolts of a Forum Spam Automator” by Youngsang Shin, Minaxi Gupta, and Steven Myers, School of Informatics and Computing, Indiana University, dissects a tool that automates forum spam. I get a chuckle out of thinking of competing automated systems posting spam to web forums in response to each other’s postings, and of automated systems trying to detect the same and remove the spam and block the posters…
This one is fun: “SkyNET: a 3G-enabled mobile attack drone and stealth botmaster“, by Theodore Reed, Joseph Geis and Sven Dietrich, all of the Stevens Institute of Technology. Follow up by watching the Terminator movies… 🙂
“An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants” by Jason Franklin (Carnegie Mellon University), Adrian Perrig (Cylab/CMU), Vern Paxson (ICSI), and Stefan Savage (UC San Diego) discusses how miscreants on the Internet get their $$. Great paper, must read! The title is a play on the title of a book by Adam Smith: “An Inquiry into the Nature and Causes of the Wealth of Nations“.
“The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments” by Peter A. Loscocco, Stephen D. Smalley, Patrick A. Muckelbauer, Ruth C. Taylor, S. Jeff Turner, and John F. Farrell (all of the NSA) argues that the security of modern systems depends on having secure operating systems. Which we (still) mostly don’t have.
“Manufacturing Compromise: The Emergence of Exploit-as-a-Service” by Chris Grier (UC Berkeley), Lucas Ballard (Google), Juan Caballero (IMDEA), Neha Chachra (UC San Diego), Christian J. Dietrich (University of Applied Sciences Gelsenkirchen), Kirill Levchenko (UC San Diego), Panayiotis Mavrommatis (Google), Damon McCoy (George Mason University), Antonio Nappa (IMDEA), Andreas Pitsillidis (ICSI), Niels Provos (Google), M. Zubair Rafique (IMDEA), Moheeb Abu Rajab (Google), Christian Rossow (University of Applied Sciences Gelsenkirchen), Kurt Thomas (UC Berkeley), Vern Paxson (UC Berkeley, ICSI), Stefan Savage (ICSI) and Geoffrey M. Voelker (UC San Diego) (whew!) investigates the use of browser drive-by infections in the underground economy.
“What’s Clicking What? Techniques and Innovations of Today’s Clickbots” by Brad Miller (UC Berkeley), Paul Pearce (UC Berkeley), and Chris Grier (UC Berkeley, ICSI), Christian Kreibich (ICSI), and Vern Paxson (UC Berkeley and ICSI) talks about click-bots – used to conduct click fraud. Wondering what that is? Read!
“Insights from the Inside: A View of Botnet Management from Infiltration” by Chia Yuan Cho (UC Berkeley), Juan Caballero (Carnegie Mellon University and UC Berkeley), Chris Grier (UC Berkeley), Vern Paxson (UC Berkeley, ICSI), and Dawn Song (UC Berkeley) explores the internal workings of the MegaD botnet, which they infiltrated.
— Steve
Helen recently pointed us to a Palo Alto blog posting on “security must reads”. I thought the Palo Alto post was interesting, but while I certainly encourage reading (whether for work or for fun), some of their entries seem a little goofy.
For example, I like William Gibson, and “Neuromancer” is one of my favorite books – I can’t tell you how many times I’ve read it. Many. I like “Mona Lisa Overdrive” even better. But a “must read” for a security professional? Hmmm…
I do have a list of “security papers must reads”, though. I was going to post this big long list, but I think I’ll start with some of my all-time general favorites and save the rest for later.
At the top of my list is Ken Thompson’s “Reflections on Trusting Trust“. This is a classic that I think really helps frame some of the challenges in Information Security. I’ll leave it to you to read the paper (it’s short and fairly easy to follow) and have your own “aha” moment.
Another “formative” paper for me was Robert Baldwin’s dissertation “Rule Based Analysis of Computer Security“, 1987. He describes (and implemented) an AI-based system for analyzing the security of Unix systems. I was exposed to this through using Dan Farmer’s “COPS” software (see “The COPS Security Checker System“), which includes a modified version of Baldwin’s Kuang software. The name “Kuang” comes from “Neuromancer“, by the way, so perhaps I should rescind my comment about it not being a “must-read” above 🙂 The basic gist of Baldwin’s paper was to use AI techniques like backward chaining over the current system state and rules describing a system’s security model, to see whether there were ways to reach certain goals (such as “become root”) from a given start state (“I’m logged in as a non-root user”). I spent some time using and trying to improve the software, which was very instructive for me. Kuang led to the development of other systems, like NetKuang. I frequently wonder what the future of AI and Information Security is, especially in these days of “Big Everything”…
I’ll also list Dan Geer’s essays on monocultures, especially “Cyberinsecurity: The Price of Monopoly“. I don’t think people think about this enough (or about separation of “trust domains”). There’s an attraction to the scalability of monocultures: I know from experience that it’s a lot easier to manage a few platforms rather than dozens, and it’s a lot easier to manage hundreds or thousands of systems if they’re all cut from the same pattern. But if something bad happens, it could happen to all of them at the same time. Oh, check this out also: “Heartbleed as Metaphor“, along the same theme…
Last one I’ll mention today is Bill Bryant’s “Designing an Authentication System: A Dialogue in Four Scenes“. This presents a fictional account of the design of Kerberos, one of the cornerstones of MIT’s Project Athena, which has become one of the foundations of authentication systems across the Internet. If you’ve ever wanted to understand why authentication systems are designed the way they are, or why they are so hard to get right, but don’t want an uber-technical treatment of the subject, THIS is the paper for you!
— Steve
Finally getting around to setting up my u.osu site…
Just created a site for the OSU CTF group – check it out!
— Steve