Upon examining the jQuery parseJSON function, I discovered that it essentially performs a basic regex validation:
parseJSON: function( data ) {
    if ( typeof data !== "string" || !data ) {
        return null;
    }

    // Remove leading/trailing whitespace to accommodate IE limitations
    data = jQuery.trim( data );

    // Validate incoming data as JSON
    // Adapted from http://json.org/json2.js
    if ( /^[\],:{}\s]*$/.test(data.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
        .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
        .replace(/(?:^|:|,)(?:\s*\[)+/g, "")) ) {

        // Attempt to use native JSON parser for modern browsers
        return window.JSON && window.JSON.parse ?
            window.JSON.parse( data ) :
            (new Function("return " + data))();

    } else {
        jQuery.error( "Invalid JSON: " + data );
    }
},
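To see what that regex chain actually accepts, here is a small standalone sketch (my own code, not part of jQuery) that applies the same three replacements and the final test:

// Same sanitization steps jQuery borrows from json2.js
function looksLikeJSON(data) {
    return /^[\],:{}\s]*$/.test(
        data
            // 1. Neutralize escape sequences inside strings
            .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
            // 2. Collapse strings, numbers, true/false/null into "]" tokens
            .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
            // 3. Strip "[" runs that follow the start of input, ":" or ","
            .replace(/(?:^|:|,)(?:\s*\[)+/g, "")
    );
}

looksLikeJSON('{"a": [1, 2, 3]}');  // true  - only punctuation and whitespace remain
looksLikeJSON('alert("xss")');      // false - the identifier and parentheses survive the replacements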
If the check passes, the native JSON parser is used where the browser provides one. In older browsers such as IE6, the string is instead handed to the Function constructor, and the resulting function is invoked to return the object.
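For a string that has already passed the check, the two branches behave like this (a standalone sketch, not jQuery code):

var data = '{"user": "alice", "admin": false}';

// Modern branch: the native parser only accepts the JSON grammar
var viaParse = JSON.parse(data);

// Legacy branch (e.g. IE6): the string is compiled and executed as a
// JavaScript expression, and whatever value it produces is returned
var viaFunction = (new Function("return " + data))();

viaParse.user;     // "alice"
viaFunction.user;  // "alice"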
Question #1: Given that this method relies on a straightforward regex test, could it be vulnerable to obscure edge-case exploits? Would it be wiser to implement a comprehensive parser, especially for browsers lacking native JSON support?
Question #2: How secure is (new Function("return " + data))() compared to eval("(" + text + ")")?
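For context, here is a small sketch (my own, not from jQuery) of the scope difference between the two constructs, which I assume is part of what the answer turns on; both still execute whatever code the string contains:

function demo(payload) {
    var secret = "local value";

    // eval runs in the calling scope, so the payload could read or overwrite "secret"
    var a = eval("(" + payload + ")");

    // new Function compiles the string into a separate function body that only
    // sees the global scope, so a reference to "secret" would not resolve here
    var b = (new Function("return " + payload))();

    return [a, b];
}

demo('{"x": 1}');  // both branches return {x: 1} for plain JSON text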