Open redirects are a persistent problem on the web -- they appear in the OWASP Top Ten (2010) as "Unvalidated Redirects and Forwards."
The troubling thing is that, unlike other flaws such as XSS, where the difficulty of a fix scales with the complexity of the application, open redirects are easy to fix.
A simple solution
If you are only redirecting within your own domain or a small group of partner domains, this is very easy to validate with a regex -- just make sure the regex anchors the domain with a trailing slash, so that http://example.com/ is allowed and http://example.com.evil.com/ is not.
Consider the following redirect, where u= represents the URL being redirected to:
http://hackerco.de/redirect.php?u=http%3A%2F%2Fexample.com
To make sure that the redirector can only redirect to other pages on the same domain, this would work:
    $url = $_GET['u']; // PHP has already URL-decoded query parameters; decoding again can allow bypasses
    if (preg_match("/^https?:\/\/hackerco\.de\//", $url)) {
        header("HTTP/1.1 302 Found");
        header("Location: " . $url);
    } else {
        // Error handling or interstitial page
    }
If you only need to allow redirection to a domain or two, this works well. But it only scales so far, especially in a high-performance scenario: with a large whitelist, you end up running a long chain of regex checks on every request to the redirector. Obviously, this doesn't work at scale.
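As an aside, one way to keep a moderate whitelist cheap to check (a Python sketch with hypothetical hostnames, not the PHP from above) is to parse the URL once and look the host up in a set, rather than running one regex per allowed domain. This solves the per-request cost, though you still have the problem, discussed below, of maintaining the list itself across servers.

```python
from urllib.parse import urlparse

# Hypothetical whitelist of allowed redirect destinations.
ALLOWED_HOSTS = {"hackerco.de", "example.com", "partner.example.org"}

def is_allowed(url: str) -> bool:
    parts = urlparse(url)
    # Require an explicit http(s) scheme and an exact host match.
    # http://example.com.evil.com/ is rejected automatically because
    # its parsed hostname is "example.com.evil.com", not "example.com".
    return parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS
```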
A scalable, performant solution
Instead of centrally managing a regex whitelist, you can add a token to the URL that authorizes the redirect, in this case t:
http://hackerco.de/redirect.php?u=http%3A%2F%2Fexample.com&t=strong_token
Maintaining a large set of key-value pairs that map each token to the URL it authorizes has the same problem as a long list of regex checks -- it hinders performance, and the mapping becomes difficult to keep in sync once you scale beyond a single server.
To solve this, the token needs to be inherently matched to the URL it authorizes, not matched by association. An approach I favor is concatenating the URL with a strong secret and then hashing it. This gives you a token that can be appended to the redirect link.
Consider this example that generates a link:
    $url = "http://hackerco.de";  // The URL to redirect to
    $secret = "some_secret";      // Secret, random string
    $token = hash("sha256", $url . $secret);
    print "<a href=\"http://example.com/redirect.php?u=" . urlencode($url) . "&t=" . $token . "\">Hackercode</a>";
When the secret is also known by the redirector, it becomes very easy to validate a redirect request quickly. This example validates the token and performs the redirect:
    $url = $_GET['u']; // PHP has already URL-decoded query parameters
    $secret = "some_secret";
    $token = $_GET['t'];
    // hash_equals() gives a timing-safe, strict comparison; a loose == here
    // would be vulnerable to PHP's "magic hash" type juggling
    if (hash_equals(hash("sha256", $url . $secret), $token)) {
        header("HTTP/1.1 302 Found");
        header("Location: " . $url);
    } else {
        // Error handling or interstitial page
    }
Even though this limits our redirector to "authorized" URLs, there is still some business risk -- simply because you intend to allow redirection does not mean that a given URL is safe. That's as much a business decision as it is a technology discussion, though.
Also, as a final note, this technique is vulnerable to brute-force attacks if you use a weak secret. I recommend choosing a reasonably random value, such as a UUID/GUID. Since time is a component of exposure when defending against brute force, you may want to implement a rotation scheme that gives the secret a limited time to live; the greater the volume the redirector processes, the smaller the TTL should be.
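One way to implement such a rotation (a Python sketch under assumed names -- the master secret, window length, and helper functions are all hypothetical) is to derive the working secret from a master secret plus the current time window, and to accept tokens from both the current and the previous window so that links issued just before a rotation don't break immediately:

```python
import hashlib
import hmac
import time

MASTER_SECRET = b"use-a-long-random-value-here"  # hypothetical; a UUID/GUID or better
WINDOW_SECONDS = 3600  # the secret's TTL; shrink this as redirect volume grows

def window_secret(window: int) -> bytes:
    """Derive the secret for a given time window from the master secret."""
    return hmac.new(MASTER_SECRET, str(window).encode(), hashlib.sha256).digest()

def make_token(url: str, now: float = None) -> str:
    """Generate the token for a URL using the current window's secret."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    return hmac.new(window_secret(window), url.encode(), hashlib.sha256).hexdigest()

def valid_token(url: str, token: str, now: float = None) -> bool:
    """Accept tokens from the current and the previous window."""
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    for w in (window, window - 1):
        expected = hmac.new(window_secret(w), url.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, token):
            return True
    return False
```

Because every server can derive the window secret independently from the shared master secret, no token list needs to be distributed; rotation happens automatically as the clock advances.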