Use test optimizations or audience buckets to prevent cross-contamination between experiments and keep your data clean.
Running multiple optimizations on the same page can help you understand how combinations of changes affect performance. This approach, similar to multivariate testing, lets you identify the best-performing combination of variations. Typically, Optimize attributes results correctly, so you don't need to take extra steps. But if you're concerned about cross-contamination between experiments, you can use the methods below to keep your data clean.
Run each experiment as a test optimization
Test optimizations show variations to 100% of visitors on a page. Because each optimization splits its traffic evenly and assigns visitors independently, the effect of one optimization averages out as noise when you analyze the results of another. Learn how to create a test optimization.
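To see why independent, even splits wash out as noise, here's a minimal sketch (illustrative only; Optimize handles assignment internally) simulating two experiments whose visitors are assigned independently:

```ts
// Illustrative only: independent 50/50 assignment for two test optimizations.
type Variant = "control" | "variant";

function assignIndependently(): { navbar: Variant; cta: Variant } {
  return {
    navbar: Math.random() < 0.5 ? "control" : "variant",
    cta: Math.random() < 0.5 ? "control" : "variant",
  };
}

// Over many visitors, each of the four combinations lands near 25%, so the
// other experiment's effect cancels out when you compare one experiment's variants.
const counts: Record<string, number> = {};
for (let i = 0; i < 100_000; i++) {
  const { navbar, cta } = assignIndependently();
  const key = `${navbar}/${cta}`;
  counts[key] = (counts[key] ?? 0) + 1;
}
console.log(counts); // each combination ≈ 25,000
```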
Bucket visitors to isolate each experiment
Optimize randomly assigns each visitor a number from 1–100. This "visitor bucket" assignment sticks with the visitor on return visits, allowing you to target specific groups consistently. To isolate experiments with buckets:
- Create a rules-based audience for each visitor bucket group
- Apply one audience to each optimization
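Conceptually, bucket assignment works like a persisted random draw. Here's a minimal sketch of the idea, assuming a hypothetical `visitor_bucket` cookie (Optimize manages this for you; the storage details below are not its actual implementation):

```ts
// Illustrative only: a stable 1–100 bucket that persists across return visits.
// The cookie name and storage approach are hypothetical, not Optimize's internals.
function getVisitorBucket(): number {
  const match = document.cookie.match(/(?:^|; )visitor_bucket=(\d+)/);
  if (match) return Number(match[1]); // returning visitor: reuse the stored bucket

  const bucket = Math.floor(Math.random() * 100) + 1; // uniform over 1–100
  document.cookie = `visitor_bucket=${bucket}; max-age=31536000; path=/`; // ~1 year
  return bucket;
}
```

Because the bucket is stored, a returning visitor always lands in the same audience.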
For example, say you want to test a version of the navbar on every page and test different CTAs on a specific page. To avoid overlap:
- Create Audience A (buckets 1–50) and Audience B (buckets 51–100)
- Apply Audience A to the navbar optimization
- Apply Audience B to the CTA optimization
This ensures that Audience A never sees the CTA changes, and Audience B never sees the navbar changes.
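Continuing the sketch above, the audience split amounts to a simple range check; the experiment functions below are hypothetical placeholders for whatever each optimization applies:

```ts
// Illustrative only: route each visitor to at most one experiment by bucket range.
function runNavbarExperiment(): void { /* apply a navbar variation */ }
function runCtaExperiment(): void { /* apply a CTA variation */ }

const bucket = getVisitorBucket(); // from the sketch above

if (bucket <= 50) {
  runNavbarExperiment(); // Audience A: buckets 1–50
} else {
  runCtaExperiment(); // Audience B: buckets 51–100
}
```

Because the two ranges are disjoint and cover all 100 buckets, every visitor is eligible for exactly one experiment.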