
Add variants

The next step in designing your experiment with Percept Insight is to create at least one variant. Percept Insight will compare these variants to the control, which is generally the current version of your application or user experience. By doing so, Percept Insight can measure how each variant performs relative to the existing performance of your app, providing insights into which changes may lead to improvements.

Create and manage variants

Percept Insight automatically generates your control and an initial variant, named "variant_key_1" by default. You can edit this variant and assign it a different name if desired.

  • Steps to Create a Variant:
    1. Assign a Name to your variant. The name is a string that you’ll use as a flag within your codebase (see the sketch after this list), and it must be shorter than 50 characters.
    2. Add a Description to clarify the variant's purpose. This step is optional but recommended.
    3. Optionally, include a Payload, a JSON object that lets you adjust a variant’s experience dynamically without additional coding.
    4. Set the Allocation. By default, Percept Insight distributes traffic evenly across your variants, but you can direct more traffic to specific variants by adjusting the distribution. Manually set the traffic percentage for each variant, ensuring the percentages total 100%.
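
For illustration, here is a minimal sketch of how the variant name can serve as a flag in application code. The `getVariant` call and the render functions below are hypothetical placeholders, not confirmed Percept Insight SDK APIs; consult the SDK reference for the exact method names.

```typescript
// Sketch only: `getVariant` and the render functions are hypothetical
// placeholders, not confirmed Percept Insight APIs.
type Variant = { name: string; payload?: Record<string, unknown> };

declare function getVariant(flagKey: string, userId: string): Promise<Variant | null>;
declare function renderNewOnboarding(): void;
declare function renderCurrentOnboarding(): void;

async function showOnboarding(userId: string): Promise<void> {
  const variant = await getVariant("new-onboarding-flow", userId);

  // The variant name is the flag value your code branches on.
  if (variant?.name === "variant_key_1") {
    renderNewOnboarding();     // treatment experience
  } else {
    renderCurrentOnboarding(); // control, or user not in the experiment
  }
}
```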

Dynamic Experiment Adjustments with Payloads: Suppose you're testing a new onboarding flow for your app. Initial results might suggest that altering the sequence of steps could improve user engagement. Instead of diving into your codebase to implement these changes manually, you can make the adjustments in a payload. Percept Insight will automatically apply these updates to the experiment, allowing you to refine and optimize the onboarding experience without needing to modify the core code.
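
As a sketch of how this might look, the payload below reorders the onboarding steps. The field names (`stepOrder`, `skipIntro`) are invented for illustration rather than a prescribed schema, and the defensive parsing falls back to the control experience if the payload is missing or malformed.

```typescript
// Hypothetical payload set on the variant in the Percept Insight UI:
//   { "stepOrder": ["profile", "permissions", "tour"], "skipIntro": true }
// The field names are illustrative, not a prescribed schema.
type OnboardingPayload = {
  stepOrder: string[];
  skipIntro: boolean;
};

function parseOnboardingPayload(payload: unknown): OnboardingPayload {
  // Fall back to the current (control) configuration if the payload
  // is missing or malformed.
  const defaults: OnboardingPayload = {
    stepOrder: ["tour", "profile", "permissions"],
    skipIntro: false,
  };
  if (typeof payload !== "object" || payload === null) return defaults;
  const p = payload as Partial<OnboardingPayload>;
  return {
    stepOrder: Array.isArray(p.stepOrder) ? p.stepOrder : defaults.stepOrder,
    skipIntro: typeof p.skipIntro === "boolean" ? p.skipIntro : defaults.skipIntro,
  };
}
```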

While there’s no cap on the number of variants you can include in an experiment, avoid overloading it with too many: traffic is split across more groups, so each variant receives fewer users and statistical significance becomes harder to reach. Aim to keep the number of variants to a manageable few.

In Percept Insight Experiment, the first variant in the list automatically serves as the control, labeled as "O".

User Assignment and Consistency

Once an experiment is launched and a user is assigned to a variant, their assignment remains fixed. Even if you modify the distribution of the variants during the experiment, the already-assigned users will not be reallocated. The new distribution will only apply to new users who become eligible after the change.

Adjusting Rollout and Distribution

To move toward a desired variant allocation without reassigning existing users, you can increase the rollout percentage so that newly eligible users enter under the updated distribution. Because existing assignments stay fixed, Percept Insight maintains consistency throughout the experiment, preventing scenarios where a user experiences one variant, then another, and then reverts to the original.
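
The sketch below shows how sticky assignment with a rollout gate commonly works in experimentation systems. It illustrates the behavior described above under simple assumptions (a hash-based bucket plus a persisted first assignment); it is not Percept Insight's actual implementation, and `store` stands in for whatever persistence layer records assignments.

```typescript
import { createHash } from "node:crypto";

// Illustrative in-memory store; real systems persist first assignments.
const store = new Map<string, string>(); // key: `${experiment}:${user}`

function hashToPercent(seed: string): number {
  const digest = createHash("sha256").update(seed).digest();
  return digest.readUInt32BE(0) % 100; // stable value in [0, 100)
}

function assign(
  experiment: string,
  userId: string,
  rolloutPercent: number, // share of users eligible for the experiment
  allocation: Array<{ name: string; percent: number }>, // must total 100
): string | null {
  const key = `${experiment}:${userId}`;

  // Already-assigned users keep their variant even if the distribution
  // changes later; this prevents a user from bouncing between variants.
  const existing = store.get(key);
  if (existing !== undefined) return existing;

  // Users hashed outside the rollout see the baseline experience. Raising
  // the rollout later admits new users under the *current* distribution.
  if (hashToPercent(key) >= rolloutPercent) return null;

  // New users are placed according to the current allocation percentages.
  const slot = hashToPercent(`${key}:alloc`);
  let cumulative = 0;
  for (const { name, percent } of allocation) {
    cumulative += percent;
    if (slot < cumulative) {
      store.set(key, name);
      return name;
    }
  }
  return null;
}
```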

By managing distribution changes effectively, you maintain user trust and avoid disrupting the experiment's integrity. This ensures a smooth and reliable testing experience without users facing inconsistencies.

Why Evenly Distributed Allocation Is Beneficial

When conducting an A/B experiment, using an even distribution of traffic across the control and variants ensures that each variant is tested under similar conditions. This approach reduces the risk of bias and provides a clearer statistical picture of which variant performs better.

Understanding Control and Non-Experiment Eligible Users

The control group typically represents the existing user experience without any changes, and users who are not eligible for the experiment see the same baseline. Even though these groups share the same conditions, the percentage allocated to the control group within the experiment still matters significantly.

Why Control Percentage Matters

  • Statistical Significance: To achieve reliable results, it's essential to have a balanced and adequate number of users representing both the control and variant groups. If the control group is disproportionately large, the variants may not receive enough exposure to demonstrate statistical significance. This could result in inconclusive outcomes, where the experiment fails to identify a clear winner.

  • Even Exposure: For accurate comparisons, both the control and variant groups need even exposure to the user base. Even if the control experience matches that of non-experiment-eligible users, the control group within the experiment must be balanced with the variants. Otherwise, observed differences in performance may reflect uneven traffic distribution rather than the changes being tested.

Example Scenario: Even vs. Uneven Distribution

Imagine you’re testing a new feature in your app with a control (O) and two variants (A and B):

  • Even Distribution: Suppose you split the traffic evenly, with O, A, and B each receiving 33.3% of the total users. If Variant A shows a 10% relative improvement in user engagement, you can confidently attribute this improvement to the variant itself, as all groups had equal exposure.

  • Uneven Distribution: Uneven distribution can lead to the following issues:

    1. Large Control Group Bias: Suppose the Control (O) receives a large percentage of traffic, say 80%, while Variants A and B each receive only 10%. In this scenario, none of the variants may reach statistical significance because the large proportion of traffic is directed toward the control. This leaves the variants with insufficient exposure, potentially rendering the experiment inconclusive.

    2. Misleading Significance: If the traffic distribution is uneven among the variants, such as O: 40%, A: 40%, and B: 20%, the experiment might still reach statistical significance for the high-traffic variants. However, B remains underpowered relative to A, making it hard to compare the variants fairly or to tell whether a winner reflects genuine effectiveness or simply greater exposure. The uneven distribution introduces potential bias, making accurate conclusions harder to draw.
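
To make the significance point concrete, here is a rough calculation using the standard two-proportion sample-size approximation at 95% confidence and 80% power. The 5% baseline conversion rate, the 10% relative lift, and the 100,000-user traffic figure are illustrative assumptions, not Percept Insight outputs.

```typescript
// Approximate users needed per group to detect a lift from p1 to p2
// with a two-sided 95% confidence level and 80% power.
function sampleSizePerGroup(p1: number, p2: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// A 10% relative lift on a 5% baseline: 5.0% -> 5.5% conversion.
const needed = sampleSizePerGroup(0.05, 0.055);
console.log(needed); // ≈ 31,200 users per group

// With 100,000 total users:
//   Even split     -> ~33,333 per group: each group clears the threshold.
//   80/10/10 split -> A and B get ~10,000 each: far short of the
//                     threshold, so the experiment likely stays inconclusive.
```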

Note: The variant name cannot be changed after launch, as it is used as a flag for data tracking.