Archive for April, 2010

ShmooCon 2009 Badge Contest

April 27, 2010 Leave a comment

ShmooCon is a great security conference, held early each year in Washington, D.C. They frequently feature a puzzle contest connected to the conference badges. In 2006, the badges were die-cut pieces of metal that could all fit together to create one large badge. Renderman figured that one out. In 2008, they had 16 different plastic badges that looked like punch cards, and somehow or other eventually gave you a PDP-8 program that would decrypt some text and, well, that one was a bit crazy and nobody solved it.

But ShmooCon V, held February 6-8, 2009, is what I’m finally getting around to writing about. This was the year that Shmoo finally gave in to their inner moose. The badges were large plastic rectangles, with a moose silhouette cut out. Along each long edge of the badge was Morse code, and a barcode was printed diagonally across the neck of the moose. The codes, and the contest that goes with them, were generated by G. Mark Hardy, a frequent contributor to ShmooCon, DEFCON, and the security industry in general.

If you’d like to try to solve this for yourself, then STOP reading now. The rest is full of spoilers. G. Mark has made the original badge codes available, which will give you as much as we had after collecting all 8 badges.

The first problem was figuring out how many badges there were. We “collected” badges from other attendees and eventually decided that there were a total of 8 different badge codes. Speaker, security, shmoo, and other staff-like badge variants simply differed in the color of the plastic, but the codes were all one or another of the 8 variants.

Another problem was how to read the Morse code. Holding the badge horizontally, and reading the code left-to-right across the top edge, made the most sense. But then, do you read the bottom at the same time, or do you turn the badge 180 degrees and read the other line also left-to-right? An interesting thing about Morse: most of the codes also mean something in reverse. As we collected badges, we eventually determined that it was read across the top, turned, and then read across the “other” top.

The next problem was the barcodes. They were harder to read, because of the diagonal printing and quality of the screening. Fortunately, I had a buddy with me (gypak) who got pretty good at squinting at the codes and reading them live off the badge. Really, since the barcodes matched the Morse (that is, no two badges with the same Morse code had different barcodes), we only had to identify 8 barcodes.

Eventually, we ended up with 8 strings of text, and 8 barcodes to go with them:


Unfortunately, this is about as far as we got during the con. We tried some simple attacks, and some outlandish ones, but simply didn’t have access to a computer and weren’t really trying it on paper. So most of the work that follows happened offline over the next several days. G. Mark was kind enough to keep a running tab of all the hints and suggestions on a webpage, and we (and several others) kept at the puzzle via email and twitter for some time.

At some point, gypak noticed an error on one of the printed badges — what looked like a “Z” (- - . .) was actually supposed to be a “G” (- - .). But nobody was far enough along for this to matter much yet. (This was the second badge in the list above — the misprint turned MOOSENUGGET into MOOSENUGZET.) Seems this error crept in somewhere between the original submission and the final pre-print proofs, and got missed all along.
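In Morse, the difference really is a single trailing dot. A minimal lookup (just an excerpt of the table — not a full decoder) makes the misprint easy to see:

```python
# Excerpt of the International Morse table -- just enough to show
# how one extra dot turns a G into a Z.
MORSE = {
    '--.':  'G',
    '--..': 'Z',
    '--':   'M',
    '---':  'O',
}

def decode(symbols):
    """Decode a space-separated string of Morse characters."""
    return ''.join(MORSE[s] for s in symbols.split())

print(decode('--.'), decode('--..'))  # G Z
```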

We continued to attack the problem for days. Early on, I started calling the leftover letters on each badge “telomeres,” after the “junk DNA” sequences that don’t seem to mean anything. Turns out here they did mean something. But the question was, were they the only ciphertext? Depending on how you read the text (e.g., is it “Moosle Defense” or “DefenseS”), you could end up with exactly 32 telomeres — so were we to use just those as a ciphertext? Or use everything all at once? It seemed unlikely that we could use everything, because that would mean a ciphertext that partially reads in English (sort of). Still, we tried everything. Vigenère, various Caesar shifts, Playfair, etc., etc., and got nowhere. Used SHMOO, SHMOOCON, SHMOOCONV, etc., as keys — again, no dice. I really liked “SHMOOCONV” as a key, since that was also printed on the badge (reinforcing that “everything you need is on the badges”).

The one thing that we knew we had right so far was the ordering of the badges. The first digit in the barcodes suggested the ordering, and the presence of a “GMARK” autograph in the last column sealed it for us. Beyond that, though, we were pretty much swimming in a sea of crazy ideas.


We were also looking hard at the barcode data. Every badge had almost exactly the same barcode, though the first digit varied from 1 to 8, and the last digit seemed to be almost random. We considered for a while that the first and last digits somehow worked as pointers to string the badges together, like a linked list, but as it turns out that last digit was just the barcode checksum, and really not part of the puzzle at all.

Of course, the digits in the middle of each barcode, the same on each badge, were terms in the well-known Fibonacci sequence: 1 1 2 3 5 8 13 21. So we had to work that into the puzzle somehow.

We were also receiving hints from G. Mark all along. Some of these were helpful, while others just seemed to muddy the waters somewhat. At some point, I started adding characters up in columns and in rows, but didn’t go too far. Finally, I apparently got really close one night, but had made some kind of transcription error and had things off by one or so (I really can’t remember).

In the end, what needed to happen was for us to shift each row based on the numbers from the barcodes. So for the first two badges, the text needed to start in column 1; the third badge, in column 2; then 3, 5, 8, etc. The last badge wrapped around and started in column 5.

Unfortunately, there was no clear indication that this was done properly (no new word popped out at us or anything), though G. Mark did try to show how certain letters in the telomeres should be marked. These spelled out BRUCE, HEIDE [sic], XO, and of course GMARK. None of these were really lined up at any point, so it was hard to see them. But the point was, once you’d marked those letters, you were left with exactly 16 letters in the telomeres.


Doing the badge text shift based on the Fibonacci numbers caused those to “align” such that no single column had more than one leftover telomere in it. Which sort of gives a sense of “cool, next stage done!”, but only if you’d been able to remove the other letters. Though it was sort of cool to see those names inserted into the code (even if Heidi’s name was misspelled), it really didn’t help any for solving. And it wasn’t really necessary, since all you needed was the Fibonacci offsets. So this wasn’t nearly as helpful as G. Mark hoped it’d be. 😦

To sum up so far:

  • Decode Morse code to get a 16-character string for each badge
  • Put the strings in order based on the first digit of each badge’s barcode
  • Apply the Fibonacci numbers from the barcodes to each badge in turn, shifting the string to start at the column specified by each term of the sequence
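The steps above can be sketched in a few lines of Python. The badge strings below are placeholders, not the real codes, and column numbers are 1-based as in the description:

```python
# Fibonacci terms taken from the barcode digits shared by every badge.
FIB = [1, 1, 2, 3, 5, 8, 13, 21]
WIDTH = 16  # each badge decodes to a 16-character string

def shift_row(row, start_col):
    """Rotate a row right so its first character lands in the given
    1-based column, wrapping around at the badge width."""
    offset = (start_col - 1) % WIDTH
    return row[-offset:] + row[:-offset] if offset else row

# Badges already sorted by the first barcode digit (placeholder text).
badges = ["ABCDEFGHIJKLMNOP"] * 8
grid = [shift_row(row, col) for row, col in zip(badges, FIB)]
```

Note that the last term, 21, wraps around modulo 16 and puts the eighth badge's first letter in column 5, matching what actually happened with the real codes.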

Now the table is complete. Not long after having the flash of inspiration that got me past the whole BRUCE/HEIDE/lining-things-up confusion, I thought “let’s just add up all the columns.” Didn’t work. At the time, it seemed to make so much sense, was just so easy, but it didn’t work, and so I was quite disappointed. That was at about 22:00 or so, on Thursday the 26th.

Sometime the next morning, I tried it again. Apparently I must’ve made some simple error, because now the answer jumped right out at me. I sent G. Mark an email at 9:23, asking “Is the answer in pig latin?” Turns out I probably should’ve just given him the answer, as there was another team breathing right down my neck. Twenty minutes later, G. Mark replied “Aybe may. What’s your guess?” So a few minutes later, I saw his reply, and sent in my solution.

The official “winning entry” was therefore stamped at 9:55 am on Friday the 27th, three full weeks after the con started. But I could’ve gotten it a half hour earlier. In fact, I could have gotten it 12 hours earlier, if I’d not screwed up the shifting at home the night before. The runner-up team of Beakmyn, Grey Frequency, and Calypso sent in their answer at 11:08, an hour and thirteen minutes after my winning entry. So I’m really glad I gave it one more try that morning.

The final step, then, is:

  • Add up all the columns of letters, using modulo-26 arithmetic.

This means that A+A = B, A+B = C, and so on, up to A+Z = A, wrapping around the end of the alphabet. For example, assuming A=1, B=2, etc., we get:

M + M + A + W + S + O + E + X
13 + 13 + 1 + 23 + 19 + 15 + 5 + 24 = 113

Modulo arithmetic means divide and pay attention only to the remainder. So 113 mod 26 is the same as the remainder of 113 / 26 — in this case, 9. And the ninth letter is ‘I.’ Continuing this for the rest of the columns gives the following text:
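That per-column sum is easy to sketch in Python. One assumption on my part: when a column's sum is an exact multiple of 26, I wrap the result to Z (the 26th letter), since the worked example never hits that case:

```python
def column_letter(letters):
    """Add up one column of letters (A=1 .. Z=26) and reduce modulo 26."""
    total = sum(ord(c) - ord('A') + 1 for c in letters)
    r = total % 26
    return chr(ord('A') + r - 1) if r else 'Z'  # 0 wraps to Z (assumption)

print(column_letter("MMAWSOEX"))  # -> I (sum is 113, and 113 % 26 = 9)
```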


Drop the Zs (treat them as spaces), and you get:


Sending that phrase to G. Mark earned me a free ticket to ShmooCon 2010. Woohoo!

So, what did I learn? Mostly, I reinforced the already certain knowledge that, almost every single time, I try the most complicated, convoluted, crazy approach to a puzzle. And that’s never the right way. I also learned that it’s very difficult to give a useful hint that doesn’t give away too much, while also not leading the players down crazy blind alleys. Finally, I learned that when you think you’ve solved it, go for broke and give the judges the answer RIGHT AWAY, because you have no idea who might be just behind you.

In the end, though this puzzle was really hard to solve, the actual mechanics were pretty simple. No complicated ciphers, permutations, transpositions, substitutions, nothing at all like that. Just decode the Morse, put ’em in the right order, shift the rows, and add the columns. The trick was figuring all that out. And that might’ve been easier if I hadn’t been trying to solve complicated ciphers, permutations, transpositions, etc. Which is probably the most important lesson for these sorts of puzzles — chances are, it’s easier than you think.

Still, there’s nothing like the thrill of solving it, except maybe the thrill of knowing you’re the first to solve it!

[See also G. Mark’s official solution page, at]


Crazy Security Con Weekend!

April 23, 2010 Leave a comment

I don’t go to a lot of information security cons. I’ve been to all the ShmooCons (they’re local, after all), and to DEFCON 3 times (plus a couple of BlackHats back when the company was paying for the trip).  So, really, like 2 a year. That was pretty much my world — and I knew there were a couple others, but didn’t really pay much attention.

But in the past year, since I’ve started following a lot of security people on Twitter, I’ve realized just how many cons there are. This weekend alone there are three — two 1-day cons, and a 2-day con.  Crazy!

These are pretty small cons, but I realized a long time ago (when ShmooCon was still pretty small) how quickly one can be overwhelmed by all the talks in the schedule. So I started printing up little pocket-sized cards with the entire schedule — then I’d laminate them and give them to all my co-workers attending the con with me. Really made figuring out where to go next much easier.

Last summer, I got an iPhone, just a few weeks before DEFCON. I’d already realized that DEFCON was much too big to fit on a single 3×5 card. Then it hit me — I’ll make a little web app for it, and use my phone! So I rushed something together, got it working more-or-less well, and asked a couple folks to check it out. I also told Dark Tangent about it, who told me that a native iPhone app was coming out, too.

Right after DEFCON, I started thinking about ways to make the phone-based scheduling system better, and one of the first thoughts was that it needed to be conference-independent. It had to support multiple cons, and not be simply re-written each time. I got in touch with the developers of the iPhone DEFCON app, and ever since we’ve been working on a solution to this problem.

We call it Khan Fu.

So far, it’s only a web app, but we think it’s pretty easy to use, and reasonably flexible. It’s been used for ShmooCon, BSidesSF, and this weekend, THOTCON, QuahogCon, and BSidesBoston [BSides data should be loaded later this evening]. It’s even got a snazzy HTML5-enabled offline mode, for when your 3G connection goes all wonky.

So, if you’re going to Quahog or BSidesBOS (THOTCON is more than halfway done already), check it out!

Categories: Conferences Tags:

Blind Belief vs Excessive Skepticism

April 20, 2010 6 comments

I’m going to go out on a limb and say that I’m still skeptical about the whole “Gizmodo’s got a 4th generation iPhone” story. Yes, it looks a lot like it could be real. And they’re saying all the right things. But the one thing that I can’t get over is this: they’re only saying those things.

There’s still no real proof. Everything we know about this comes from Gizmodo (other sites with pictures claim to have only received those photos; none of them have actually handled the unit).

I’m not saying this is a hoax. I don’t think any of us really know enough to say one way or the other. What I am saying is that we’re all jumping up and down over what’s really not more than a few well-done photos and videos. In the past, such photos have been met with disbelief. This time, not so much, for whatever reason.

Anyway, some specific points that I wish had been addressed:

  • There are no pictures of the phone turned on. They claim it had been remote-wiped, but that there was still a “Connect to iTunes” screen that appeared to be much higher resolution (to support rumors of a better screen). Why no pictures of that screen?
  • Also, there are claims of Apple logos on the internals of the device. Why no pictures of them? Sure, there’s a single photo of a wire harness and an empty case, but no chips. Not even the mainboard. One of the first things that I wanted to know was what networks would this work on, so chipset details would have been good to get.
  • Related to that — they say it uses a micro SIM. I’ve never seen a micro SIM before. It would have been nice to see that, with a comparison to a regular SIM. What carrier is it on? Is there an adapter to use the micro SIM in a normal phone? Try that, tell me what carrier it wants to use (even if it’s disabled, I’d think it should at least come up with a carrier ID).
  • We’re told that the computer identified this as an iPhone. Why no details? Did it come up as “iPhone3,1” or “iDev2,2” or something else equally interesting? Did you plug it into a Linux box and see what you can get there? USB details, screenshots, movies, etc… all would have been nice to see.
  • Has anyone tried to restore a backup to the phone? Would that even work? Even a failure would be interesting. Perhaps a remote-wipe prevents such a restore, or maybe iTunes would refuse because it didn’t recognize the specific model, but again, that’s something I’d expect to have been at least discussed.
  • “Well, they got a letter from Apple, that proves it!” I’m pretty sure there are enough copies of cease-and-desist letters from Apple floating around the net that anyone could make a convincing-looking letter with only a little trouble. Actually, an interesting angle — couldn’t anyone in the area forge a letter from Apple, arrange a pickup, and walk away with a cool new phone? 🙂
  • There’s been no independent verification. Perhaps nobody else wanted to go on the record, but even a mention of “we offered to show it to unnamed high-profile bloggers, but they all refused” would have been a nice touch. But at least having one or two well-known personalities say “Yeah, I saw it too, and it looks legit” would have been worth the trouble.
  • Finally, there’s Occam’s Razor. Has Apple EVER lost development hardware like this before? There’s been plenty of press about the iPads provided to key developers before release, and the security on those was impressive. How’d a 27-year-old engineer get one out of the building? (unless he wasn’t authorized, in which case he’d really be in for a world of hurt).

Bottom line: I just don’t know. I want to believe it’s a real iPhone, just because it does look nice and appears to have all the features we’ve been jonesing for. On the other hand, have we ever seen all of our rumored features materialize on a new iPhone release? Pretty convenient that they’re all there (well, except for T-Mobile or Verizon, which they didn’t demonstrate).

But setting aside emotions, wanting to believe, and simply looking at the evidence, I remain skeptical. If only because, as I said, all we’ve seen is photos and movies of the outside, and a couple distant or ambiguous pictures of the internals. And a letter.

On the flip side, though, is this question: Would Gizmodo really have it in so bad for the entire community that they’d try to play everyone with an elaborate hoax? That too, seems unlikely.

So, again, I just don’t know. It would have been nice to see more details, and especially to get some independent verification, but still…it’d be hard to really know for certain unless Apple publicly admitted it.

There’s also been a lot of talk about the ethics of this, if it is a real phone. Is it ethical for a journalist to pay $5000 for a phone that they know isn’t the seller’s personal property? Is it illegal? Certainly, Gizmodo hasn’t signed any NDAs, but Trade Secret law can be odd, especially (or so I’ve read) in California. On the other hand, the phone wasn’t marked proprietary or secret or anything, so you might argue that Apple hasn’t really tried hard to protect it. (You might also argue that letting a young engineer take it out drinking isn’t too responsible either).

I suppose they could claim that they paid for the chance to look at it for a few days, fully expecting to turn it over to the real owner once they’ve come forward. And, really, if someone were to leave a prototype next-generation Prius, doors unlocked, in the parking lot at Car and Driver — would we really expect them to not take a boatload of pictures while they waited for the owner to come back?

I’m not quite ready to totally vilify Gizmodo for this. If it’s all true, they might’ve cost someone his job — though even if they’d immediately hand-delivered it to Apple headquarters, possibly in exchange for brownie points, the guy’s job might still be in jeopardy. And if Gizmodo’s job is to break stories, then they did exactly what you’d expect. All I can say is I’m glad I’m not in a business where I have to make that kind of decision. And that alone is a reason I’m not going to judge them, either way.

Now I’ve rambled. Probably too much. So let me sum up:

  • Cool pictures.
  • Cool anecdotal evidence, but no photos/videos to back it up.
  • No independent verification, other than photos on other sites.
  • Possible confirmation from Apple, but even that has no independent verification.
  • Apple’s never lost something like this before (that I can remember).
  • Simplest answer: We’ve been had.
  • Most exciting answer: It’s all real.

Honestly, I’m not sure which of those two options I want to be true.

And realistically, as long as the next iPhone officially supports T-Mobile, so I can stop doing the jailbreak / unlock dance, then I won’t personally give a damn what new features it has. 🙂

Categories: Rambling Opinion

Half-Baked Idea: Isolate Browser Security Contexts to Limit XSS Attacks

April 14, 2010 Leave a comment

So I was reading yesterday about the Cross-Site Scripting attack against Apache. And it struck me that there might be an easy way to reduce or eliminate a lot of these attacks, using better isolation within the browser.

Essentially, my thought boiled down to this: Why, when I load a page in the browser, should that page have access to cookies from another server?

“But it doesn’t,” you might say. “The same-origin policy on cookies prevents one page from accessing another server’s cookies!” True. But if the malicious page manages to convince your browser to load a page from the target server, with its own cookie-stealing XSS code injected, then that malicious page, indirectly, has access to those cookies.

So let me rephrase the thought: Why should a web page be permitted to load a page (and worse, execute code) in an unrelated session?

The reason this is possible, to my not-very-XSS-savvy mind, is that the browser really has only a single Security Context (set of cookies, session information, login credentials, etc.). (It’s possibly more complex than this, but the description should hold at a basic level for this discussion). If you log into a web site, then that session information is stored at the browser level, and any window, tab, or frame within that current browser instance has access to that information. The same-origin policy can help to restrict what information a page loaded within those views can access, but the information is still there.

Which leads to my half-baked idea: Within the browser, isolate session information (the session’s Security Context) to only those pages directly related to that session.

This is best described by way of example [Note that I’m using Gmail and Twitter for simplicity, and don’t mean to suggest that any XSS bugs currently exist in either service 🙂 ]:

  • I log in, fire up the browser, and open Gmail.
  • Then, I open a second tab and log into Twitter.
  • Later in the day, I click a link in a tweet for a “really awesome photo that you MUST see!”
  • This opens in a 3rd tab (or possibly stays in the Twitter-focused 2nd tab), and contains what really is an awesome photo, but also includes a hidden iframe.
  • That iframe fetches a page from Gmail that includes an XSS-injected script.

Now, here’s where my crazy idea and reality diverge. In the current situation:

  • The browser fetches that page, and so it gains access to the browser’s global Security Context.
  • The XSS script executes within that context, and since the page came from Gmail’s own domain, it also passes the same-origin test and therefore has access to my Gmail cookies.
  • The script then sends my Gmail session information to the attacker.
  • Now the attacker has my Gmail account. (boo!)

In my suggestion to isolate security contexts, here’s what happens:

  • As before, the image page is loaded in either a 3rd tab or the tab containing the Twitter feed (depending on how I opened it).
  • However, this tab is not the tab with Gmail, and thus does not have access to that Security Context.
  • Therefore, when the invisible iframe is loaded, the script will not be able to steal my Gmail cookies.
  • XSS attack thwarted. (yay!)

Now, you may be thinking, “bye-bye to multi-window web applications.” But that doesn’t need to be the case. There’s certainly nothing preventing the browser from letting a new tab or window inherit a tab’s security context when a user requests a new window from within that session. It’s only when the browser loads a new page, outside of a session’s same-origin boundaries, that the security context is forfeit.

Now, you may be thinking, “bye-bye to single-window web browsing.” But that doesn’t need to be the case. If you’re in (for example) Gmail, and click a link for an external site, that link can still load in the current tab. For the time being, that tab loses access to the Gmail context. But if the user, having read the linked page and chuckled appropriately at the joke within, then clicks “back” in the browser, the context should switch back to Gmail.

Would this have helped in the recent Apache attack? I think so. From what I understand, the attack went like this:

  • User is logged into issue tracking application.
  • User reads a bug report entered into that application by a malicious user.
  • Report includes a link to an external site (in this case, a URL shortening service).
  • That link redirects to another page from the bug tracking application, with XSS-injected code to steal the user’s credentials.

If the browser had the sort of context-limiting controls I’m envisioning, then the privileged session’s credentials would have been “lost” as soon as that external URL was requested, and not regained, even though it eventually redirected back to the originating site.

I will note, however, that if the URL had never “left” the site (was not obfuscated through a redirector service, but was clearly within the current context’s originating domain), then this approach wouldn’t have helped. But hopefully, then, the URL would have been ugly enough (containing embedded XSS code) that the user might’ve noticed the problem before clicking on it. Or better yet, it would have been rejected outright and never made it into the malicious user’s bug report in the first place.

To try to simplify, I’d propose starting with the following rules:

  1. Credentials (temporary cookies, current login sessions, etc. — the “Security Context”) shouldn’t be shared across browser views (windows, tabs, frames, invisible code-executing sandboxes, etc.).
  2. Within a view, the Security Context should not be shared with subsequently-loaded data with a different origin. This includes both full replacements of the current view (navigating to a new page) and subordinate views displayed or processed within the requesting page (iframes, etc.).
  3. A Security Context can be shared with a new tab, window, etc., when the user executes something within the session that requests that new view, provided it’s still within the same origin as the current site.

At the moment, I can only think of one real drawback (though I’m sure there are more). If, for example, you’re clicking on a bunch of different links, all of which lead to a site at which you have an account, then each of those clicks will pull up that content as a general, unauthenticated, first-time-visiting user. So if you have a current YouTube session open, and click on a YouTube link in a tweet, you’ll have to re-authenticate within the window that click opens in order to post any comments to the video. Perhaps the user could be permitted to drag such a link into a previously-authenticated session window, but unfortunately we then start to diminish the level of protection.

I’ve already been pointed at two projects that start to address this. The first, Caja, appears to be focused more on the secure development of a page in the first place, not as much on XSS vulnerabilities. The second, CookiePie, has some of the features I suggest but is implemented in a manual fashion, with the goal being simultaneous use of a website by multiple different accounts. Neither solution really gets to the deep-in-the-browser XSS prevention that I think isolated Security Contexts could automatically provide.

So how crazy is this? Am I missing something obvious, or is it just chance that we haven’t tried to do this already? Or are there security features already present in browsers that do this, just not in a way I’ve noticed? It seemed, once I thought of it, absurdly simple, but then again, I thought of it, so that’s sort of a given.

I welcome any thoughts and criticisms (but still hope that there might be some value here).

Categories: Crazy Ideas, Security

It’s Time To Start

April 14, 2010 Leave a comment

I’ve got a blog on another site. Sort of. It’s never updated. It’s been over a year since the last posting, and, frankly, that’s embarrassing. However, I’m constantly thinking of things that I want to talk about, that won’t fit into the limitations of Twitter or Facebook status updates. But because I never post anything, I don’t post anything new, ’cause then I’d look like an idiot who never posts anything.

So I’m pulling a fast one, and putting content out here instead. Maybe, if I actually start doing stuff here, I’ll also start posting on my other site, and I can merge the blogs later. And then I won’t look quite as much like an idiot.

Though if you’ve read this far, then you know the truth. Congratulations, both of you. 🙂

Categories: Rambling Opinion