Quality Assurance

Before any experiment reaches your dashboard, it passes through a chain of automated checks. An experiment that fails any check is excluded from what you see. This page describes what we check and why. For the security-specific subset, see the Security guide.

The gates

Every experiment passes through the following gates before it's delivered. A failure at any stage removes the experiment from the delivery queue. No human can override a failed check.

  • Live-DOM selector verification: selectors that don't resolve against your live page
  • Browser execution: syntax errors, runtime exceptions, timeouts, silent failures
  • Visual verification: code that runs without error but produces no visible change
  • Safety validation: remote-code-execution vectors and DOM-XSS sinks (see Security)
  • Lint rules on selector breadth: selectors broad enough to affect unrelated regions of your site
  • Brand and tone conformance: copy and visual design that drift from your site

The sections below describe what each gate does and why it's there.

Live-DOM selector verification

Every selector in every experiment is resolved against the live page at build time, not against a cached copy or a static snapshot. This catches:

  • Selectors pointing at elements that don't exist on your site
  • Selectors that depend on framework-generated class names like .css-1a2b3c that change on every deploy
  • Selectors that match a stale DOM but miss the current one

If an experiment can't find stable, resolvable selectors for every element it touches, it isn't built.
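The framework-generated class-name problem in particular lends itself to a simple heuristic. A minimal sketch, assuming hash-suffixed CSS-in-JS classes like the .css-1a2b3c example above (the function name and regex are illustrative, not the production rule):

```javascript
// Hypothetical heuristic: flag selectors that depend on framework-generated
// class names (e.g. CSS-in-JS hashes such as .css-1a2b3c or .sc-bdVaJa),
// which change on every deploy and so can't be resolved stably over time.
function looksFrameworkGenerated(selector) {
  // Matches a class token with a common CSS-in-JS prefix followed by a hash
  return /\.(?:css|sc)-[a-z0-9]{5,}/i.test(selector);
}
```

A selector flagged this way would be replaced with one anchored to stable attributes before the experiment is built.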

After delivery, you can re-run this check yourself at any time. See Selectors and stability in the Developer Guide for the steps.

Browser execution

Once an experiment is generated, the code is injected into a headless Chrome instance pointed at the live target URL and actually executed. This catches:

  • Syntax errors
  • Runtime exceptions thrown during injection or activation
  • Infinite loops and long-running code that never finishes
  • Silent failures where the script runs but the DOM isn't touched

An experiment that errors or hangs is recorded as a build failure and excluded from delivery. An experiment that executes cleanly moves on to the next gate.

Execution runs in Chrome only. Cross-browser rendering differences (Safari, Firefox, older Edge) are not covered by this gate. The experiment code is written in ES5-compatible JavaScript for broad browser support, but final cross-browser verification is something to do on your side before launch. See Quality assurance before launch.
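The timeout part of this gate can be pictured as a race between the experiment's execution and a timer. A minimal sketch, not the production mechanism (the real gate runs the script inside headless Chrome):

```javascript
// Hypothetical sketch: convert long-running or never-finishing code into a
// hard failure by racing its promise against a timeout.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('execution timed out')), ms);
  });
  // Whichever settles first wins; the timer is always cleaned up.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

An experiment whose execution promise loses the race is recorded as a build failure, exactly as described above.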

Visual verification

After the code executes, a screenshot of the variant DOM is taken and compared against the control. If the two are identical, the experiment didn't apply: it passed the code checks but didn't produce the intended change, so it is excluded. This is a last-mile catch for cases where the code ran without error but did nothing visible. Common causes include the framework re-rendering the modified node before the screenshot, or a selector that matched an element on which the change had no visible effect.
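At its core this comparison is a pixel-level equality check between the two screenshots. A minimal sketch over raw pixel buffers (the production comparison works on real screenshot data; the function here is illustrative):

```javascript
// Hypothetical sketch of the visual check: compare two pixel buffers
// (e.g. RGBA byte arrays from the control and variant screenshots) and
// report whether the variant changed anything at all.
function screenshotsDiffer(control, variant) {
  if (control.length !== variant.length) return true; // dimensions changed
  for (let i = 0; i < control.length; i++) {
    if (control[i] !== variant[i]) return true; // at least one pixel differs
  }
  return false; // identical: the experiment did not apply
}
```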

The before-and-after screenshots for every experiment that passes this gate are attached to the experiment in your dashboard, so you can see exactly what the change looks like before deciding whether to run it.

Safety validation

Every built experiment is parsed into an abstract syntax tree and scanned against 14 rules covering remote code execution (eval, dynamic <script>/<iframe>/<link>/Worker injection, cross-origin fetches) and DOM-XSS sinks (variable-into-innerHTML, inline on* handlers, unsafe URL-property assignments). Any experiment that violates any rule is removed from delivery before it reaches you.
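To give a flavour of what such rules look for, here is a heavily simplified sketch of a few of them. The real validator works on an abstract syntax tree; a regex scan like this is only an illustration and would miss aliased or obfuscated forms:

```javascript
// Hypothetical, simplified versions of a few safety rules. Rule names and
// patterns are illustrative only; the production checks are AST-based.
const SAFETY_RULES = [
  { name: 'eval',             pattern: /\beval\s*\(/ },
  { name: 'script-injection', pattern: /createElement\s*\(\s*['"]script['"]/ },
  { name: 'innerHTML-sink',   pattern: /\.innerHTML\s*=/ },
  { name: 'inline-handler',   pattern: /\bon\w+\s*=\s*['"]/ },
];

// Returns the names of every rule the given code violates.
function violatedRules(code) {
  return SAFETY_RULES.filter(r => r.pattern.test(code)).map(r => r.name);
}
```

Any non-empty result removes the experiment from delivery.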

See the Security guide for the full list of rules, the threat model they defend against, and how you can verify them yourself.

Lint rules on selector breadth

Some selector patterns are technically valid but dangerous in production because they match more of the page than intended. Three patterns are flagged and fixed before delivery:

  • Universal selectors. querySelectorAll('*') matches every element on the page and is almost never what was intended.
  • Unscoped element selectors. A bare element name such as div or a in a query matches across your header, footer, navigation, and every unrelated section. These are flagged even if the experiment only intends to read the first result.
  • Broad ancestor scopes combined with text reads. Patterns like document.querySelectorAll('main div')[0].textContent = '' can blank out arbitrary regions of your page if the DOM shifts. These are flagged and rewritten to use scoped, specific selectors.
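The three patterns above can be sketched as a single lint pass over a selector string. A minimal illustration, assuming a whitelist of common element names (the production rules are more thorough):

```javascript
// Hypothetical sketch of the selector-breadth lint. Flags the three
// patterns described above; the element-name list is illustrative.
const BARE_ELEMENTS = /^(?:div|span|a|p|li|section|main)$/;

function selectorBreadthIssues(selector) {
  const issues = [];
  const parts = selector.trim().split(/\s+/);
  if (parts.includes('*')) issues.push('universal selector');
  if (parts.length === 1 && BARE_ELEMENTS.test(parts[0])) {
    issues.push('unscoped element selector');
  }
  if (parts.length > 1 && BARE_ELEMENTS.test(parts[parts.length - 1])) {
    issues.push('broad descendant match'); // e.g. 'main div'
  }
  return issues;
}
```

A flagged selector is rewritten to a scoped, specific one (for example, an id- or attribute-anchored selector) before delivery.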

Brand and tone conformance

Every experiment is generated against two references extracted from your own site:

  • Visual style guide. A granular reference to your visual identity: colour palette, type hierarchy, spacing, button styles, and the shape and weight of cards and borders. Each generated component is styled to sit natively alongside your existing ones.
  • Tone of voice guide. A practical reference to how your brand writes: vocabulary, formality, sentence length, and how you handle calls to action and microcopy. Each piece of experiment copy is written against it.

Both guides are available in your Pleras dashboard, and you can tell us at any time if they've drifted (for example, after a rebrand or a redesign) so current and future experiments stay aligned.

What isn't automated

Some judgements are ones only you can make:

  • Whether the hypothesis is right for your business. Every experiment arrives with the hypothesis, the supporting evidence, and the reasoning laid out, not as a finished recommendation to ship but so you can apply your own judgement about relevance and priority.
  • Whether the copy sounds exactly right. The tone of voice guide catches the vast majority of cases. Ultimately, you know your voice best. If a line ever feels off, tell us and we'll fix both the experiment and the guide.
  • Whether the visual change fits in context. Same as copy: the style guide gets most of the way, but the final arbiter is you. Every experiment comes with before-and-after screenshots so you can see how it sits before you launch it.

What to check on your side

The checks above cover the integrity of the experiment as we deliver it. Once it's in your hands, there are checks only you can do: preview modes, cross-browser testing, Core Web Vitals, tracking confirmation, and so on. These are covered in Quality assurance before launch in the Experiment Setup & Monitoring guide.

When something slips through

If an experiment ever reaches your dashboard but fails one of the standards above (a broken selector, a copy line that sounds wrong, a visual element that feels off), tell us. Every report is treated as a QA regression:

  • The rule that should have caught it is either tightened or a new rule is added
  • The experiment that failed is rebuilt or removed, not patched in place