Nature of Man
… nothing that happens to Man is ever natural

31 October 2007, Wednesday

The Curse of Magic Quotes

Filed under: Uncategorized — Mordred @ 00:27

Much has been said about this brainfart of a feature, and attempts at reverting its behaviour are common among all PHP coders. Old versions of the PHP manual gave this function in their “Best Practice” example:

// Quote variable to make safe
function quote_smart($value)
{
    // Stripslashes
    if (get_magic_quotes_gpc()) {
        $value = stripslashes($value);
    }
    // Quote if not integer
    if (!is_numeric($value)) {
        $value = "'" . mysql_real_escape_string($value) . "'";
    }
    return $value;
}
… which has apparently become a bad meme in more than one way. I have already mentioned (as many others did well before me) the blunder of using is_numeric(), but a more alarming mistake is the way magic quotes are handled. It isn’t bad just because it’s buggy, but because it teaches a seriously flawed idea.

It is also bad, as it is (was) given as an example to newbies, and even after it was replaced in the manual, one can still see the meme “in the wild” in snippets posted on sites and fora. What’s worse, the manual didn’t explain what was wrong in the code before it was replaced; as I see it, maybe they still have no idea what wrongness they have been teaching.

So, after you’ve had a paragraph of reading time to think about it, do you see it? The bug lies in the assumption that the data given to quote_smart() comes from $_GET, $_POST, $_COOKIE, $_REQUEST, etc. If you pass something else (a constant string, a value from a file or database, etc.) containing slashes (for a superficial example, take an SMB-style path: “\\host\share\file.ext”) while magic_quotes is on, the function will blindly run stripslashes() and thus damage the string (“\hostsharefile.ext”).
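For a concrete look at the damage, here is a minimal sketch (my own, not from the manual) of what the magic-quotes branch of quote_smart() does to a string that never came through GPC input:

```php
<?php
// My own demonstration: stripslashes() applied to data that never
// passed through GET/POST/cookies damages legitimate backslashes.
$path = '\\\\host\\share\\file.ext';   // the literal string \\host\share\file.ext

// This is what quote_smart() does whenever get_magic_quotes_gpc() is true,
// regardless of where $path actually came from:
$damaged = stripslashes($path);

echo $damaged;   // \hostsharefile.ext -- the path is ruined
```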

The function also doesn’t check for the magic_quotes_sybase setting, which completely changes the way magic quotes are handled.

The correct way of negating magic_quotes is of course globally, at script startup, with proper setting checks and while keeping in mind that those arrays may contain other arrays.
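A sketch of that startup-time reversal (my own code; the undo_magic_quotes() name is made up, and since the magic_quotes machinery is deprecated and removed from modern PHP, a function_exists() guard keeps the snippet runnable):

```php
<?php
// Sketch: undo magic quotes once, globally, at script startup
// (my own code; magic_quotes is long gone from modern PHP).
function undo_magic_quotes($value)
{
    if (is_array($value)) {
        // the input superglobals may contain nested arrays -- recurse
        return array_map('undo_magic_quotes', $value);
    }
    return stripslashes($value);
}

if (function_exists('get_magic_quotes_gpc') && get_magic_quotes_gpc()) {
    // Note: with magic_quotes_sybase on, ' is doubled ('') instead of
    // backslash-escaped, so stripslashes() would be the wrong inverse;
    // that case needs a str_replace("''", "'", ...) walk instead.
    $_GET     = undo_magic_quotes($_GET);
    $_POST    = undo_magic_quotes($_POST);
    $_COOKIE  = undo_magic_quotes($_COOKIE);
    $_REQUEST = undo_magic_quotes($_REQUEST);
}
```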

But enough about the bug: it only happens with certain inputs under certain setups, and it damages the data in a way that doesn’t really affect security. So what’s the big deal? Is it worth writing so long a rant about it? I think so, and the reason lies in that implied assumption above about where the data comes from. What it basically says is that input comes only from one of the input superglobal arrays, which is wrong. Input comes from all kinds of sources, and you never know which of them may be under the control of an attacker.

So data coming from other places, take the database for example, gets labelled as “secure” in the mind of the coder, who readily inserts it into a dynamic SQL query, and thus second-order SQL injections are born. Such are the curses of bad memes, caveat coder.
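Schematically, the birth of a second-order injection looks like this (my own illustration with a made-up users table; addslashes() stands in for mysql_real_escape_string() so the sketch needs no database connection):

```php
<?php
// Illustration only (made-up schema): correctly escaped input in step 1
// still yields an injection in step 2, because escaping does not
// survive the round-trip through the database.

// Step 1: the attacker registers this username; it is escaped properly:
$username = "admin'-- ";
$insert = "INSERT INTO users (name) VALUES ('" . addslashes($username) . "')";
// The INSERT is harmless; the database stores the raw value: admin'--

// Step 2: later code treats database output as "secure" and concatenates it:
$from_db = $username;  // what SELECT name FROM users would hand back
$update = "UPDATE users SET pass='newpass' WHERE name='" . $from_db . "'";
// Result: ... WHERE name='admin'-- '
// The quote closes early and "-- " comments out the rest: injection.
```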

17 September 2007, Monday

The Unexpected SQL Injection

Filed under: Uncategorized — Mordred @ 20:58

I am pleased to announce that WASC has published a paper I wrote for the security articles project. The project is very nice, because all articles must pass through a critique and voting process by a peer group of security professionals (and I am proud to be among them when not wearing the hat of an author).

The Unexpected SQL Injection
(When Escaping Is Not Enough)

We will look at several scenarios under which SQL injection may occur, even though mysql_real_escape_string() has been used. There are two major steps at writing SQL injection resistant code: correct validation and escaping of input and proper use of the SQL syntax. Failure to comply with any of them may lead to compromise. Many of the specific issues are already known, but no single document mentions them all.
Although the examples are built on PHP/MySQL, the same principles apply to ASP/MSSQL and other combinations of languages and databases.

Full text: [HTML] [TXT] [ZIP (examples)]

23 August 2007, Thursday

CORE GRASP (potential) pitfalls

Filed under: Uncategorized — Mordred @ 20:39

Core Security announced their new security solution, CORE GRASP. In short, it is meant to distinguish between “good” and “tainted” (i.e. coming from the user/attacker) data and stop certain functions (like mysql_query()) from working if they are given tainted data which contains dangerous symbols.

I have experimented with similar good/tainted recognition techniques myself (though not at the low level they do it) and am convinced that this is a viable way to help testing for various kinds of injections (SQL injection, XSS, code execution, etc.). Note the emphasis on testing: in my view, production environments would benefit from such solutions only if the overhead were significantly lower. On the other hand, if coders use this as a taint-testing tool during development, they can benefit from it right away.

As Stefan Esser has already pointed out, the overhead of handling the taint flags is currently way too big (~30% according to Core). He also points out several problems with the code (which is, after all, still in its infancy).

Code problems aside (and those can be fixed more or less easily, I’m sure), from a quick read of their paper I see two potential serious design flaws in this product, which may not be so easy to correct. (Or which may not exist at all; I haven’t tested them yet. Caveat lector.)

First, “tainted data” is not a trivial concept. That $_GET/$_POST/$_COOKIE contain tainted data is obvious, ditto for $_SERVER (although they don’t mention it in the paper; I still have to check the source). The trouble is that tainted data can also come from the database, local files and maybe other sources. Since “taintedness” is lost when something is put in the database, an application that relies on GRASP to stop SQL injections, for example, will be unprepared for second-order injection (based on data from the database that the attacker inserted in a previous step). It can be argued that the “dangerous” data would be stopped the first time, but this really depends on the implementation of both GRASP and the PHP code being protected. Note that with second-order injection, the first step contains only benign data (for example escaped quotes) that gets dangerous only when handled at a later step. Also, two benign pieces of data may be combined into an injection string. So either GRASP will be so unforgiving as to stop, say, valid use of quotes in user input, or it will let them in, allowing second-order injection attempts to go under its radar.

The second trouble is with their implementation of the SQL grammar parser (again, according to the paper, not the actual code).
They say:

The protection mechanism for injection attacks can be modeled by a
Finite State Machine (FSM for short) which allow a formal representation of
well-formed strings. The FSM evaluates a predicate and then answers true if
the string does not represent an exploit, and false if it does. We can design a
FSM for each kind of vulnerability, allowing a precise per-character analysis
in order to perform security checks detecting vulnerabilities in cross language
boundaries (e.g., SQL inside PHP, Javascript inside HTML, etcetera.)
(… snip …)
The FSM for this protection was based on MySQL’s lexical analyzer.

(Take this with a pinch of salt though, it is long since I last read my textbook in discrete mathematics)
The trouble is that FSMs can only recognize regular languages, while SQL is a recursive (context-free) language. For example you can do this:
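(The example itself did not survive in this archive; a representative reconstruction, mine and not necessarily the original, is arbitrarily deep nesting of parenthesized expressions and subqueries:)

```sql
-- perfectly legal MySQL: expressions and subqueries nest recursively
SELECT ((((((1))))));
SELECT (SELECT (SELECT (SELECT 1)));
```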


Meanwhile an FSM will only work for a finite number of such recursive steps. True, MySQL itself will accept only a finite number of them too, but it will be a number big enough to make simulating such “finite recursion” with FSMs unfeasible. Thus a possible attack against GRASP’s FSM would be to use a large number of nested parentheses (with or without other SQL tokens) and wait for the FSM to run out of states. (This will happen sooner rather than later: if a non-recursive language requires N states, adding just one level of recursion at M points needs N×M states, adding two needs N×M² states, and so on; it quickly gets out of hand.)
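Such a payload is trivial to build (a hypothetical sketch; I have not tested it against GRASP, and the nesting depth MySQL itself tolerates varies by build and stack size):

```php
<?php
// Hypothetical payload builder: bury a probe inside deeply nested
// parentheses and see whether an FSM-based filter runs out of states
// before the real MySQL parser does.
$depth   = 10000;   // deep; whether MySQL still accepts it is untested here
$payload = str_repeat('(', $depth) . '1' . str_repeat(')', $depth);
$query   = 'SELECT ' . $payload;
```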

20 July 2007, Friday

Quid Probat Ipsos Probaturae (*)

Filed under: Uncategorized — Mordred @ 22:57

(*) … err, “Who Checks the Checkers”? Luckily, it’s a dead language, which means there’s no one to make me write this a hundred times correctly :)

Checkers has been solved: the best perfect play can achieve is a draw. Now I’m waiting for the xkcd strip about it :)

In other news, we will soon (July 23 and 24) be able to witness a much harder battle between Man and Machine than the Deep Blue one. The way they remove luck as a factor is neat.

The poker challenge isn’t as significant in the public imagination as IBM Deep Blue’s victory over world chess champion Garry Kasparov in 1997, but it’s more significant in terms of artificial intelligence science, Schaeffer says.

Chess has proven to be a much easier game for computers to play than poker because all the chess pieces are on the board and there are no unknowns. In poker, you don’t know your opponents’ cards, so you must make your decision based on imperfect knowledge.

“Poker is a much better representative of real-world situations with imperfect information, negotiation, bluffing and misrepresentation,” Schaeffer says. “This makes it much more interesting. From a scientific point of view, it’s a harder problem because of that.”

11 May 2007, Friday

A Rhapsody in Deep Blue

Filed under: Uncategorized — Mordred @ 12:50

On this day 10 years ago, the human race got an inferiority complex. A computer, Deep Blue, beat Russian Garry Kasparov, the greatest chess player on the planet, and mankind’s place in the order of things was reshuffled.
(A Decade After Kasparov’s Defeat, Deep Blue Coder Relives Victory)

Gnh, gnh, gnh, let me restate that:

On this day 10 years ago, ~~the human race~~ chess players got an inferiority complex. A computer, Deep Blue, beat Russian Garry Kasparov, the greatest chess player on the planet, and ~~mankind’s place in the order of things was reshuffled~~ mankind finally got to understand that computers are better than humans at searching narrow game trees.

Well, call me back when you solve Go…

sudoku, chess, go

(Btw a variant of Mancala has been solved as well). Note that chess is not actually solved in the sense that a winning strategy is known for all situations, it’s just that computers are better than human players. The skill of the current computer Go opponents, on the other hand, places them in the middle of the amateur kyu range, which is way too far from even the amateur dans, let alone the pros (the rating scale is not linear in nature; moving from 6th dan to 7th dan is much harder than moving from 1st dan to 2nd dan).
