Security
Every Pleras experiment is JavaScript that runs on your live site, with the same access to cookies, storage, and the DOM as your own code. This page explains what we check before an experiment reaches you, how we prevent XSS, and how you can verify it yourself. For the full set of pre-delivery checks, see the Quality Assurance guide.
What you're being asked to trust
When you deploy an experiment, you're running code we generated on your origin. Two classes of threat apply, and the check is designed to close both:
- Capabilities we could introduce directly. If an experiment loads a `<script>`, `<iframe>`, or `<link>` from an arbitrary origin, calls `eval` or `new Function`, spawns a Worker, or fetches from a host we don't control, it becomes a path for code or data we didn't review to execute on your site. The remote-code-execution rules prevent this class of threat at the source: we can't ship an experiment that pulls in or evaluates external code outside a narrow, reviewed allowlist of vendors. See Approved-vendor allowlist.
- DOM-XSS sinks a third party could weaponise. If an experiment reads a dynamic value from the page (a URL parameter, a cookie, a DOM text node, an API response) and passes it through an HTML or attribute parser, an attacker can craft input that becomes executable markup on your origin. The DOM-XSS rules prevent this class by blocking every pattern that lets dynamic input flow into such a sink.
The first class is about what we could accidentally or indirectly bring in. The second is about what a third party could do to an experiment we've already delivered. Both need to be closed before an experiment can reach you, which is why the check scans for both.
What we check for
Before any experiment can reach your dashboard, its code is parsed into an abstract syntax tree and statically scanned against 14 rules covering two categories of unsafe behaviour. The check is automated and deterministic. The same rules run against every experiment, every time, with no option for human override. An experiment that fails any rule is blocked from delivery.
Using an AST rather than text matching means the check can distinguish between a string literal the agent wrote and a value assembled from dynamic input, which is the distinction that matters for XSS. Comments and strings that mention banned APIs aren't mis-flagged, and subtle obfuscations don't slip past.
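As a contrived illustration of that difference (this is not the validator's actual code), a plain text match flags a harmless snippet whose comment merely mentions `eval`, while an AST scan does not:

```javascript
// Contrived illustration (not the validator's implementation): a plain text
// match flags a harmless snippet because a comment merely mentions eval().
const safeSnippet = [
  "// Note: never call eval() in experiment code.",
  'banner.textContent = "variant B";',
].join("\n");

const naiveTextMatch = (src) => /\beval\s*\(/.test(src);

console.log(naiveTextMatch(safeSnippet)); // true: a false positive
// An AST-based scan parses the file and looks for an actual call node whose
// callee is `eval`; comments and string contents never produce such a node,
// so this snippet passes.
```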
Remote code execution
The first category covers any pattern that could cause code from outside the experiment file to execute. These are blocked:
| Pattern | Policy |
|---|---|
| `eval()`, `new Function()`, dynamic `import()` | Always blocked. No legitimate use in an experiment. |
| `document.write`, `document.writeln` | Always blocked. No legitimate modern use. |
| `new Worker`, `SharedWorker`, `ServiceWorker` | Always blocked. |
| `XMLHttpRequest` | Always blocked. Use cases are covered by `fetch`. |
| `<script>`, `<iframe>`, `<link>` injection | Blocked unless the source URL is a string literal (not computed at runtime) and the host appears on the approved-vendor allowlist. |
| `fetch()` | Blocked unless the URL is a literal that points to your own site (a relative path, or an absolute URL whose hostname is the same as the page the experiment runs on) or to a host on the allowlist. |
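The `fetch()` rule can be sketched as a small predicate. This is an illustrative reconstruction, not the validator's code; `fetchUrlAllowed` and the allowlist contents are assumptions for the example:

```javascript
// Illustrative sketch of the fetch() URL policy, not the validator's code.
// ALLOWLIST contents and fetchUrlAllowed are assumptions for this example.
const ALLOWLIST = new Set(["js.stripe.com", "widget.trustpilot.com"]);

function fetchUrlAllowed(urlLiteral, pageHost) {
  if (urlLiteral.startsWith("/")) return true; // relative path: your own site
  try {
    const host = new URL(urlLiteral).hostname;
    // absolute URL: must be the page's own host or an allowlisted vendor
    return host === pageHost || ALLOWLIST.has(host);
  } catch {
    return false; // not parseable as an absolute URL: reject
  }
}
```

Note that a computed URL never reaches this host check at all: the rule first requires the `fetch` argument to be a string literal.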
Approved-vendor allowlist
A blanket block on external resources would rule out a lot of legitimate CRO work. Adding a Trustpilot review widget to a product page, embedding a Stripe payment element at checkout, or testing a support-chat prompt with Intercom are all standard experiments that need a third-party script or iframe to function.
The allowlist is the controlled escape valve for these cases. It's a short list of specific vendor hosts we've reviewed and approved for use in experiment code.
Every host is added deliberately. If an experiment tries to inject a <script>, <iframe>, or <link> from a host that isn't on the list, or tries to fetch from an unknown origin, it is rejected automatically.
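A hypothetical sketch of that decision against a simplified AST node (the hosts shown are examples only, not the real allowlist):

```javascript
// Hypothetical sketch of the script/iframe/link injection rule against a
// simplified AST node. The hosts are examples, not the real allowlist.
const VENDOR_ALLOWLIST = new Set(["js.stripe.com", "widget.intercom.io"]);

function scriptSrcAllowed(srcNode) {
  // Anything computed at runtime (a variable, concatenation, or call) is
  // rejected before the host is even considered.
  if (srcNode.type !== "Literal") return false;
  try {
    return VENDOR_ALLOWLIST.has(new URL(srcNode.value).hostname);
  } catch {
    return false;
  }
}
```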
If you'd like a specific vendor added because you already use them, get in touch and we'll review and add it.
How a DOM-XSS attack would work
Before the rules themselves, it's worth seeing the DOM-XSS threat concretely. The textbook attack against a client-side experiment looks like this:
- An experiment reads a dynamic value on the page (a URL parameter, a cookie, a DOM text node, an API response)
- That value is written into an HTML or attribute sink (`innerHTML`, `insertAdjacentHTML`, an inline `on*` handler, a `javascript:` URL) without sanitisation
- An attacker crafts a URL that puts a malicious payload into that dynamic value, and sends it to one of your users
- The user clicks; the payload executes on your origin with full access to cookies, session, and the DOM
No compromise of your site or ours is required. The only prerequisite is an experiment that moves untrusted input into an HTML sink. The DOM-XSS rules block every pattern that would make this attack possible.
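The attack can be made concrete with a hypothetical vulnerable experiment. `renderGreeting` simulates the sink by returning the string that would be assigned to `innerHTML`; `params` stands in for `new URLSearchParams(location.search)`:

```javascript
// Hypothetical vulnerable experiment: the shape of code the check rejects.
// renderGreeting returns the string that would be assigned to innerHTML;
// params stands in for new URLSearchParams(location.search).
function renderGreeting(params) {
  // UNSAFE: an untrusted query value flows straight into an HTML sink
  return `<div class="greet">Welcome back, ${params.get("name")}!</div>`;
}

// Step 3 of the attack: a crafted URL turns the dynamic value into markup.
const attack = new URLSearchParams(
  "name=<img src=x onerror=alert(document.cookie)>"
);
const html = renderGreeting(attack);
// `html` now contains a live onerror handler; assigning it to innerHTML
// on your page would execute the payload on your origin.
```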
DOM-XSS sinks
These are the rules that block the attack above. They cover patterns where a dynamic value could flow into an HTML or attribute parser and be interpreted as markup or code:
| Pattern | Policy |
|---|---|
| `.innerHTML`, `.outerHTML` assignment | Value must be literal-derived: it must trace back through local variables, concatenations, and template literals to string, number, or boolean literals declared in the same file. Function arguments, function return values, and DOM reads are rejected. |
| `insertAdjacentHTML(pos, value)` | Same rule applied to `value`. |
| `setAttribute('on*', ...)` | Always blocked. Event handlers must use `addEventListener`. |
| `el.onclick = "..."` (string right-hand side) | Blocked. A function is allowed; a string is not. |
| `.href`, `.src`, `.action`, `.formAction`, `.data`, `.codeBase` assignment | Blocked when the value is a bare variable that could contain a `javascript:` URL. Literals, or a literal safe prefix concatenated with a dynamic segment, are allowed. |
The `innerHTML` rule is the one that prevents the attack in How a DOM-XSS attack would work above. The moment an experiment tries to interpolate a URL parameter, a cookie, or any DOM-derived value into HTML, the check rejects it before delivery.
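The safe-prefix rule for URL attributes can be illustrated with a hypothetical helper: because the prefix is a literal, the dynamic segment can never become the URL's scheme, so a `javascript:` payload is inert:

```javascript
// Hypothetical helper showing the safe-prefix rule for URL attributes.
// "/products/" is a literal prefix, so the dynamic tail can never supply
// the URL's scheme; a javascript: payload is neutralised by encoding.
function buildProductLink(slug) {
  return "/products/" + encodeURIComponent(slug); // allowed: literal prefix + dynamic tail
}

// A bare-variable assignment like `a.href = userValue` is rejected instead,
// because userValue could hold "javascript:alert(1)".
```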
How this fits into delivery
The static check runs after an experiment is built and before any of it can reach your dashboard. An experiment that fails is:
- Removed from the delivery queue (it cannot be surfaced in your dashboard)
- Retained on our side as a flagged file for audit, with the rule ID and line number that triggered the failure
- Recorded in our internal error log, never surfaced to you as a buildable experiment
Because the filter runs upstream of everything you see, an experiment that violates any of the listed rules cannot reach your dashboard. This check is one of several. See Quality Assurance for the full set.
How you can verify it yourself
- Read the code. Every experiment is a single, self-contained, human-readable JavaScript file. You can open it in any editor and check for yourself that it doesn't call `eval`, doesn't build HTML from URL parameters, and doesn't talk to any server other than yours or an allowlisted vendor.
- Search for the patterns. A quick search for `eval(`, `Function(`, `innerHTML`, `insertAdjacentHTML`, `document.write`, or `XMLHttpRequest` in an experiment file will show you what's there. If you find any use of `innerHTML`, inspect what's being assigned. It should always be a literal string, never a variable sourced from `location`, `document`, or a function return.
- Use a Content Security Policy. If your site enforces a strict CSP, it will catch any unexpected script source or inline handler at runtime. See Content Security Policy in the Developer Guide for how experiments interact with CSP.
- Use your platform's review gates. If your A/B testing platform supports per-experiment review or approval, enable it so you can inspect each experiment before it goes live.
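The pattern search above can be scripted. This is a hypothetical convenience helper, not a substitute for the validator; a hit means "inspect this line", not "this is unsafe":

```javascript
// Hypothetical self-audit helper: flag lines that mention the patterns
// listed above. A hit means "inspect this line", not "this is unsafe".
const SUSPECT = [
  /\beval\s*\(/, /\bFunction\s*\(/, /\.innerHTML\b/,
  /insertAdjacentHTML/, /document\.write/, /XMLHttpRequest/,
];

function suspiciousLines(source) {
  return source
    .split("\n")
    .map((line, i) => ({ n: i + 1, line }))
    .filter(({ line }) => SUSPECT.some((re) => re.test(line)));
}
```

Run it over an experiment file's text and it lists each line worth reviewing, with its line number.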
What's out of scope
We're explicit about the boundaries of the check so you know what it does and doesn't defend against:
- Upstream XSS. If your own site renders attacker-controlled content into the DOM before our experiment reads it, and our experiment then reads that DOM text, the attack succeeded before our code was involved. No client-side check can see past the DOM boundary. The right defence is your server-side output encoding.
Reporting concerns
If you spot something in an experiment that looks unsafe, even if our check passed, please tell us. Every report is reviewed, and the validator is updated to catch that class of issue if it doesn't already.