Sunday, November 7, 2010

Test test

This is just a check for a script.

Wednesday, June 16, 2010

AT&T is Wrong About the iPad Breach & I have code to prove it

When I'm not working on Hamilton Carver: Zombie PI, the web series, I've been known to do infosec research. Hell, it's still my day job.

So, at this point I assume everyone knows about the iPad Breach.

Originally AT&T stated that the disclosure of emails was the only issue from the information breach. Chris Paget soon reasoned out that this wasn't the case, since American GSM vendors actually construct their Integrated Circuit Card Identifiers (ICC-IDs) to correspond to their International Mobile Subscriber Identities (IMSIs).

Thanks to a little bit of work between Ian Langworth and myself, we now have a tool that takes advantage of this ICC-ID to IMSI correspondence. It also spits out a little bit of information about the ICC-ID itself. The tool can be found here, but it's likely to move; if the link is dead, I'm probably migrating it to new hosting, and I'll update this post with the new address.

In Chris' post he describes the scary things you can do with an IMSI and links to a paper that explains how AT&T and T-Mobile ICC-IDs can be converted to IMSIs. There's just one catch: the paper is incomplete, and its method for AT&T / Cingular doesn't work. This bothered me, even more so after watching many people quote the paper on Slashdot and other sites without trying the algorithm. There was also some confusion as to whether the ICC-IDs could correspond to the IMSIs at all, so I built the tool to show definitively that in this case it works.

So how do you convert an ICC-ID into an IMSI?

Basically, you decode an AT&T or T-Mobile ICC-ID like this:

  1. Read off the first 2 digits as the system code (all the ones we care about start with 89 for GSM).
  2. Read the ITU dialing prefix out of the next 2 - 3 digits, making sure to match the longest prefix first.
  3. After parsing out the ITU prefix, parse the next three digits as the Mobile Network Code (MNC).
  4. Match the MNC against the ITU prefix's country to find the Mobile Country Code (MCC).
e.g. given an ITU prefix of 01, meaning the US, I look at the list of MNCs for that country and find the MCC paired with the matching MNC.
  • At this point we have the MCC and the MNC and are only missing the subscriber number to form an IMSI. Getting the subscriber number out of an ICC-ID is vendor specific, so it differs between AT&T and T-Mobile.
  • To get an AT&T subscriber number, simply take the next 9 digits after the MNC.
ex: if I have an ICC-ID of 89014101234567891, the subscriber number is 123456789
  • To get a T-Mobile subscriber number, take the two digits before the double zero and concatenate them with the seven digits following the double zero.
ex: if I have an ICC-ID of 8901260390012345679, the subscriber number is 391234567
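To make the steps above concrete, here's a minimal Python sketch of the derivation. The prefix/MNC table here is a stand-in covering only the two examples (the actual tool uses the full ITU prefix and carrier MNC lists), and the function name is mine, not the tool's.

```python
# Minimal sketch of the ICC-ID -> IMSI derivation described above.
# Assumption: a tiny hardcoded table covering only the two worked
# examples; a real tool needs the full ITU prefix and MNC lists.

# Maps (ITU prefix, MNC) -> MCC for the carriers we care about.
MNC_TO_MCC = {
    ("01", "410"): "310",  # AT&T (US)
    ("01", "260"): "310",  # T-Mobile (US)
}

def icc_id_to_imsi(icc_id):
    assert icc_id.startswith("89"), "not a GSM ICC-ID"
    rest = icc_id[2:]
    # Step 2: match the ITU dialing prefix (2-3 digits, longest first).
    # This sketch only handles the US prefix "01".
    itu = rest[:2]
    rest = rest[2:]
    # Step 3: the next three digits are the MNC.
    mnc = rest[:3]
    rest = rest[3:]
    # Step 4: look up the MCC for this (country, MNC) pair.
    mcc = MNC_TO_MCC[(itu, mnc)]
    # Vendor-specific subscriber number extraction.
    if mnc == "410":            # AT&T: the nine digits after the MNC
        subscriber = rest[:9]
    elif mnc == "260":          # T-Mobile: two digits before the double
        i = rest.index("00")    # zero, plus the seven digits after it
        subscriber = rest[i - 2:i] + rest[i + 2:i + 9]
    else:
        raise ValueError("unknown carrier")
    return mcc + mnc + subscriber

print(icc_id_to_imsi("89014101234567891"))    # -> 310410123456789 (AT&T)
print(icc_id_to_imsi("8901260390012345679"))  # -> 310260391234567 (T-Mobile)
```

Running it on the two example ICC-IDs yields the subscriber numbers 123456789 and 391234567 embedded in the resulting IMSIs.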

Honestly, this is pretty lame technically, but it proves the point that an ICC-ID disclosure is equivalent to an IMSI disclosure for AT&T and T-Mobile. In case anyone is wondering: yes, we've checked the derived IMSI values against the true IMSI values with OpenBTS and the USRP.

-Pete

Tuesday, March 23, 2010

Slides from ShmooCon

So this has been way overdue, but here are my slides from my ShmooCon presentation, Ring -1 vs. Ring -2: Containerizing Malicious SMM Interrupt Handlers on AMD-V.

Honestly, I'm a little disappointed in myself over the presentation. I think it went well, but I didn't really have time to make the talk as accessible as I would have liked. This was also my first time presenting at a conference, so the usual jitters apply; hopefully I'll get the chance to do it again.

A little explanation: The goal of this project was to determine how to protect a virtual machine monitor from a malicious System Management Mode Interrupt handler.

When I started working for Crucial Security, my main focus was on building a hypervisor able to isolate a process from a malicious OS, the idea being that even if an attacker got access to a server, they'd still have to exploit the specific service to gain access to its data. During development we kept discovering new avenues for attacking the VMM; one of the most damaging was a malicious SMI handler, as demonstrated in Rafal Wojtczuk and Joanna Rutkowska's presentation on attacking Intel's TXT.

Intel's response to their attack was that the VMM developer should run the SMI handler inside of a virtual machine. Joanna's counterpoint was that no one has good documentation on how to do that. Since I was developing a tiny VMM for security, I took that challenge and, on the AMD-V platform, developed a hypervisor that could run a limited SMI handler inside of a virtual machine as a proof of concept (PoC).

Unfortunately, almost all of my time was spent developing and debugging the PoC, with less attention than I would have liked going to the actual presentation. I still need to get the source off of backups since my corporate laptop died, but it should be up by the end of May. Comments and feedback are greatly appreciated.

My hope is that this gives VMM developers a better idea of what's involved in isolating an SMI handler and, in the rare cases where that level of paranoia is needed, helps other people find the information they need more quickly.

Wednesday, March 10, 2010

Content Finally

As people following along via RSS may have noticed, I'm finally putting some of my old blog articles into this blog. The process has been really enlightening; some of the posts are embarrassing, as I keep finding old promises to write articles that would have been a lot of fun to research.

The nice part about this is that I can still follow through on those promises, since I never put a date on them. Anyway, my personal goal is to start blogging again, as it's both cathartic and a good means of keeping notes for research.

Cheers,

-Pete

How'd You Get Into Security?

One of the most important things in a professional's career is their sense of what is and isn't possible. The caveat is that this is often colored by your perception of what is and isn't easy to do; in other words, your experience. In a recent conversation with Russell, this topic came up in the context of information security professionals. As the conversation progressed, two points stuck out in my mind: first, that I came to security from a development background, and second, that this may not be as common as I thought.



Before I'd ever thought about networks, security, or anything else related to information security, I'd learned C and Pascal in high school, and I'd been doing basic .bat scripting since my grandfather bought my family their first computer in 1993. Until I went to Northeastern University and majored in computer science, my interest in the computer was primarily in figuring out how the box worked at the computer architecture level, which I dabbled in on and off.



So how'd I actually get into this field? It was the summer of 2001, a hard summer for finding co-op assignments. For those of you not familiar with co-ops, a co-op is basically a stint where you work in your field, sort of like an internship. I'd already done some minor contract work with the chemistry department, writing a Visual Basic program (eww, in hindsight) to do some post-processing of mass spectrometer results. I'd landed a job as a JSP developer building a web-based college advising system, and things started to go wrong when a fellow student (Jon) walked in and said, "I've hacked your database."



Before I get into recounting how we were completely owned and what I did about it, let me give you some background on the system. I was roughly the sixth developer and had never touched Apache, Tomcat, MS-SQL, or JSP before this. It was a good learning experience, though the project was doomed from the start. Too many developers had worked on it and then left, and in the middle of the project the professor and his two graduate students (who basically only spoke Chinese, which I cannot speak) left as well, leaving me with a collection of JSP pages and database tables that had each made sense to one of the previous devs.



Jon had found numerous SQL injection bugs, which, since we were using MS-SQL, gave him shell access. What's worse, this was a college advising database that of course contained student ID numbers and contact information. Like most universities, Northeastern used social security numbers for student IDs: game over. I spent the better part of a week building a really simplistic filtering library, going back and forth with Jon until he was unable to compromise the box via SQL injection. Looking back, I'm sure there was XSS and enough other vectors to make everyone involved cringe in horror.
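In hindsight, the cleaner fix than a hand-rolled filtering library is parameterized queries, where attacker input is bound as data and never parsed as SQL. A small sketch of the idea, using Python's stdlib sqlite3 as a stand-in for the MS-SQL setup described above (the table and values here are made up for illustration):

```python
# Sketch of the better defense: parameterized queries instead of
# hand-rolled input filtering. sqlite3 stands in for MS-SQL here;
# the schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id TEXT, name TEXT)")
conn.execute("INSERT INTO students VALUES ('001234567', 'Jon')")

# Attacker-controlled input; as a bound parameter it is just data,
# never interpreted as SQL syntax.
student_id = "' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM students WHERE id = ?", (student_id,)
).fetchall()
print(rows)  # [] -- the injection string matches nothing
```

Had the same string been spliced into the query text, the `OR '1'='1` clause would have dumped every row.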



As expected, the web app never really got off the ground, which left me rather disappointed. The experience of potentially losing 400+ students' social security numbers was too much for me; it made me realize that if I wanted to continue in development I'd need to learn about security, and I've been doing that ever since.



So how does this relate to my original point? I like to build things; I was a developer, so I look to build things when possible. Your background colors the decisions you're likely to make.



So how'd you get into information security?

ISC's Four Methods of Decoding Javascript + 1

After reading CGISecurity today, I was pointed to an article over at the ISC. It lists four methods of decoding JavaScript, summarized below.

Note: The descriptions, pros, and cons are taken from Daniel Wesemann's ISC post.

The Lazy Method
Description: Edit your copy of the hostile HTML so that it only contains the necessary HTML headers and the JavaScript you're interested in. Then hunt down all occurrences of "document.write" and "eval" inside the JavaScript and replace them with "alert". Copy the modified file onto a web server of yours, or to some other place from where you can easily open it with a web browser, which should make the decoded JavaScript appear inside one (or several) pop-up "alert" windows.
Pros: Quick and easy to accomplish.
Cons: Usually only decodes one (the first) encoding stage. Don't be disappointed if you get the next level of gibberish in your alert pop-up.

The Tom Liston Method
Description: Replace occurrences of eval(txt) and document.write(txt) with document.write("<textarea rows=50 cols=50>"); document.write(txt); document.write("</textarea>");
Pros: Quick and easy to accomplish, and in case the textarea reveals another stage of encoded JavaScript, this method allows for easy cut-and-paste to continue the decoding.
Cons: Careful with typos. If you have a typo in the leading textarea definition, the following "document.write(txt)" will go right to the browser, as it originally would have, and the exploit will execute.

The Perl-Fu Method
Description: Try to make sense of the JavaScript decoding routine, and then re-create it with a short code block in Perl.
Pros: Very easy and fast for use on the dumber encoding methods like XOR, Caesar ciphers (character permutations), etc. Also the "safest" method, as this approach alone does not actually execute the hostile code.
Cons: You have to speak Perl and be able to translate the JavaScript decoding into Perl. Much too tedious an approach for very convoluted JavaScript, or JavaScript using functions which are hard to translate into Perl (like the arguments.callee code seen frequently in fall 2006).

The Monkey Wrench Method
Description: Use the stand-alone JavaScript interpreter "SpiderMonkey" to run the encoded JavaScript block. Replace document.write(txt) with print(txt) before doing so; SpiderMonkey doesn't have any document object by default.
Pros: Little hassle, good results, fast method to get around a hard "outer shell" of a JavaScript block encoded multiple times; works well in combination with the Perl-Fu method.
Cons: Fails for JavaScript code deliberately written to only uncompress on Internet Explorer.


First, a few quibbles about the list. One, the Perl-Fu method is really about translating JavaScript into your language of choice; the tricky part is that the translation doesn't handle interaction with the DOM. Two, the Monkey Wrench method has some trouble with browser-specific interpreter issues, such as handling subscript notation.
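To illustrate the Perl-Fu idea in a language other than Perl: suppose the hostile page used a trivial XOR decoder along the lines of `s += String.fromCharCode(e.charCodeAt(i) ^ 7)`. The sample payload and key below are hypothetical, but the translation into Python is only a few lines:

```python
# Hypothetical example of the "Perl-Fu" approach: re-create a trivial
# JavaScript XOR decoder in another language instead of executing the
# hostile code. The payload and key here are made up for illustration.

def xor_decode(encoded, key):
    # Equivalent of: s += String.fromCharCode(e.charCodeAt(i) ^ key)
    return "".join(chr(ord(c) ^ key) for c in encoded)

payload = "document.write('owned')"
encoded = xor_decode(payload, 7)   # XOR is its own inverse

print(xor_decode(encoded, 7))      # -> document.write('owned')
```

As the ISC article notes, this approach is the safest of the four, since the hostile code itself never runs.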



So my little addition to this for JavaScript is the use of Firebug. Firebug, in case you've missed it, is a JavaScript debugger in the form of a Firefox extension. Among other things, it lets you set breakpoints and single-step through lines of JavaScript. The method breaks down like this:




  1. Put the JavaScript in an HTML page; we'll call this the test page.
  2. Modify the JavaScript to have a debugger statement at its start.
  3. Make sure Firebug is enabled.
  4. Load the test page in Firefox.
  5. Set breakpoints where you want and single-step.



Since I copied the pros and cons from the ISC list, I figured I'd better contribute my own row for the Firebug method.

The Firebug Method
Description: Add a debugger statement to the start of the JavaScript, then single-step your way through the code, setting breakpoints to speed up the work, stopping to examine the DOM, etc.
Pros: Lets you examine the code for side effects like manipulating the DOM, and gives you the power of a debugger.
Cons: Does not work with browser-specific JavaScript, except for Firefox. Can be slow if you just single-step your way through.


For example, if I have a sample file:




<html>
<script>
while(1){
alert('fooo');
}
</script>
<body>
</body>
</html>


I can simply add a breakpoint before the alert call by adding a debugger statement:


<html>
<script>
while(1){
debugger;
alert('fooo');
}
</script>
<body>
</body>
</html>


This drops me into Firebug at the debugger statement. I can now single-step through the code and inspect the DOM in the GUI.



For more information on using Firebug, check out this video here.

A Short Week And See You At Shmoocon

So this week is a short week for me, since I'll be attending Shmoocon in Washington, DC. If you're attending Shmoocon, or just hanging out in DC, drop me an email.