Optimize your push and email messaging with OneSignal’s A/B testing to improve engagement and campaign performance.
A/B Testing helps you optimize messages by testing up to 10 variants and evaluating performance based on metrics like open rates and click-through rates. This allows you to improve user engagement and make data-driven messaging decisions.
Marketers focused on Lifecycle and Growth can use insights from A/B tests to make impactful improvements that align with broader business goals. Whether you're testing push notifications or emails, A/B testing lets you experiment with different designs, copy, calls-to-action (CTAs), and more.
A/B testing is only available when sending through the dashboard. It is not available with the API.
To create A/B tests with Journeys, use Journey Split Branches.
When creating push and email campaigns through the dashboard, click the A/B Test button, then Add Variant to create additional variants.
Your Audience (included and excluded segments) will be all the users eligible for the campaign.
After you create the variants, select the portion of your audience that should receive the randomized variants. For example, 25% means a random 25% of the selected segment(s) will receive one of the variants. A message with 2 variants (A and B) targeting 25% sends variant A to 12.5% of the audience and variant B to another 12.5%. A message with 10 variants (A-J) targeting 25% sends each variant to 2.5% of users.
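The split arithmetic above can be sketched as a tiny helper. This is illustrative only; the dashboard performs this calculation for you, and the function is not part of any OneSignal API.

```python
# Sketch of the A/B test split arithmetic described above:
# the test percentage is divided evenly among the variants.

def variant_share(test_percentage: float, num_variants: int) -> float:
    """Percentage of the total audience that receives each variant."""
    return test_percentage / num_variants

print(variant_share(25, 2))   # 12.5 -> variants A and B each reach 12.5%
print(variant_share(25, 10))  # 2.5  -> variants A-J each reach 2.5%
```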
Image showing percentage scaler for variants.
By default, 25% of your audience receives the A/B test. For valid results, each variant must receive enough users. The more variants you use, the larger your test group must be.
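As a back-of-envelope check on whether your audience is large enough, assuming you decide on a minimum number of users per variant (the threshold is your choice; OneSignal does not publish one):

```python
import math

# Back-of-envelope sizing: how large must the total audience be so
# each variant reaches at least `min_per_variant` users? The minimum
# threshold is an assumption you choose, not a OneSignal requirement.

def required_audience(min_per_variant: int, num_variants: int, test_percentage: float) -> int:
    """Smallest total audience so every variant gets at least min_per_variant users."""
    return math.ceil(min_per_variant * num_variants * 100 / test_percentage)

print(required_audience(1000, 2, 25))   # 8000 users for 2 variants at 25%
print(required_audience(1000, 10, 25))  # 40000 users for 10 variants at 25%
```

This shows why the guidance above holds: more variants (or a smaller test percentage) demands a larger total audience.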
If you set 100%, the variants are distributed evenly across all users in the audience, which eliminates the ability to send the "winner" to the remaining users.
Messages sent as A/B tests are marked as such under the Messages tab. Click into each test to see the full report or view the variant-specific reports.
We provide statistics to help you evaluate performance and choose a winner. The screenshot below shows how to view the different stats and select a winning variant. The winning variant is then sent to all remaining members of your target audience.
Image showing the ability to select a winner from the A/B Test Report.
Image showing A/B Test Button
Image showing how to add more variants
Select the percentage of your target audience that should receive each variant. See How A/B Testing works for more details.
The percentage is applied to the total audience and distributed evenly across the variants. For example, 25% with two variants sends each variant to a random 12.5% of the audience.
Setting any percentage below 100% allows you to select a winner afterward.
View results under Messages > Push > A/B Tests Tab. Click any test to view variant-specific reports.
Use performance metrics to manually select a winner.
Image showing A/B Test Button
Image showing how to add more variants
Select the percentage of your target audience that should receive each variant. See How A/B Testing works for more details.
The percentage is applied to the total audience and distributed evenly across the variants. For example, 25% with two variants sends each variant to a random 12.5% of the audience.
Setting any percentage below 100% allows you to select a winner afterward.
View results under Messages > Email > A/B Tests Tab. Click any test to view variant-specific reports.
Use performance metrics to manually select a winner.
Review past performance data so you can evaluate test success meaningfully.
Clearly define what you’re testing and what success looks like.
Control your experiment by isolating variables like:
Include your “usual” version as a baseline for measuring improvements.
Create control groups by tagging users randomly and excluding that segment from the campaign. You can export user data and create a segment from a CSV import.
Send all variants at the same time to avoid timing bias.
Iterate based on results to continuously optimize message performance.
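The control-group tip above can be sketched as a small script: assuming your exported CSV has an `external_id` column (a hypothetical column name; match it to your actual export), randomly flag a holdout fraction so those users can be re-imported as a segment and excluded from the campaign.

```python
# Sketch of the control-group tagging step: randomly mark a fraction
# of exported users as "control" so they can be re-imported as a
# segment to exclude. The `external_id` column name and 10% holdout
# are assumptions for illustration, not OneSignal requirements.
import csv
import random

def tag_control_group(in_path: str, out_path: str, holdout: float = 0.10) -> None:
    """Copy the exported CSV, adding an is_control column set randomly."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(reader.fieldnames) + ["is_control"])
        writer.writeheader()
        for row in reader:
            row["is_control"] = "true" if random.random() < holdout else "false"
            writer.writerow(row)
```

After re-importing the tagged file, build a segment where `is_control` is `true` and add it to your campaign's excluded segments.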
Not within the standard message form. However, you can test different segments using Journeys with Split Branches and Yes/No Branches.
Not yet. You must manually choose the winning variant or use 100% to send all variants.