An A/B Definition stores the amount of time to collect results, how to select a winner, and which List, Template, Segmentation Rule, and Suppression List to use for each A/B Test.
To Add an A/B Test Definition
- Go to TASKS > VIEW A/B TEST DEFINITIONS and then click Create new A/B Test Definition. The Add A/B Definition page will display.
- Enter a Name that is descriptive and short enough to select from a list.
- Enter an optional Description to allow other users to understand the purpose of the definition.
- From the option 'What is the maximum length of time the test should run?', select the length of time for the test to collect data, from one hour up to 72 hours. The test time defaults to 24 hours (one day).
- From the option 'What are you measuring?', select the type of data to measure, where Unique means one per contact, and Pieces Delivered is the Number of Pieces Sent minus Unique Hard Bounces:
- Unique Open Rate: Number of Unique Opens divided by Pieces Delivered.
- Unique Click Rate: Number of Unique Clicks divided by Pieces Delivered.
- Click-to-Open Rate: Number of Unique Clicks divided by total number of Unique Opens.
- Responder Rate: Number of Unique Clicks and Opens divided by Pieces Delivered.
*WhatCounts recommends setting A/B Test Definitions to use Unique Click Rate instead of Unique Open Rate, because proxy image servers (such as those used by Google, Yahoo, and Apple Mail Privacy Protection) can inflate open counts. For more information about Apple MPP, please read the WhatCounts Apple Mail Privacy Protection article.
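As a rough illustration, the four metrics above can be computed from per-sample counts. The numbers below are hypothetical and chosen only to make the arithmetic easy to follow:

```python
# Hypothetical counts for one sample; the formulas follow the definitions above.
pieces_sent = 10_000
unique_hard_bounces = 200   # Pieces Delivered = Pieces Sent - Unique Hard Bounces
unique_opens = 2_450
unique_clicks = 735

pieces_delivered = pieces_sent - unique_hard_bounces                # 9,800

unique_open_rate = unique_opens / pieces_delivered                  # 0.25
unique_click_rate = unique_clicks / pieces_delivered                # 0.075
click_to_open_rate = unique_clicks / unique_opens                   # 0.30
responder_rate = (unique_clicks + unique_opens) / pieces_delivered  # 0.325
```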
- From the option 'How will a winner be determined?', select the method to determine the winning sample:
- Automatically select winner and send remainder: Select the sample with the highest rate and automatically send the winning sample to the remainder of contacts. In the case of a tie, an email is sent so the creator can select the winner.
- Automatically select winner, but allow me to decide: Select the sample with the highest rate, but send the creator an email to decide whether to send it to the remainder.
- Allow me to manually select winner: When the test period has expired, send the creator an email to select the winner.
- From the option 'When should we stop collecting data for the test?', select when to stop collecting data:
- At the end of test period (hours defined above)
- When a specific threshold is reached: the test stops once a sample's measured rate reaches the defined percentage, from 1 to 99. This is based on the Samples. For example, if the threshold is an Open Rate of 25%, the first Sample in which 25% of contacts open the message causes the test to stop collecting data and determine the winner.
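A minimal sketch of the threshold stop condition, using a hypothetical `threshold_reached` helper and made-up sample counts (this is not the product's actual implementation):

```python
# Hypothetical check for the "specific threshold" stop condition: the test
# stops as soon as any sample's measured rate reaches the configured percentage.
def threshold_reached(samples, threshold):
    """Return the names of samples whose rate meets or exceeds the threshold."""
    return [name for name, s in samples.items()
            if s["unique_opens"] / s["delivered"] >= threshold]

samples = {
    "Sample A": {"unique_opens": 120, "delivered": 500},  # 24% open rate
    "Sample B": {"unique_opens": 130, "delivered": 500},  # 26% open rate
}

# With a 25% Open Rate threshold, Sample B triggers the stop.
```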
- Select a List. The List must have at least four contacts and cannot be a Super List, Seed List, or have Sticky Campaigns enabled. As soon as you select a list, the samples will automatically be defined and set to two samples at 20%. If you do not see a List available, exit the page and verify that your list has at least four contacts, then return to create your definition.
- Select a Segmentation Rule. This is optional.
- Select a Suppression List. This is optional.
- If you have selected a Segmentation Rule and/or Suppression List, click UPDATE COUNTS to make sure the number of contacts in the samples is accurate.
- The default sample size is two samples that equal 20% of the total number of contacts.
- To add a sample, click Add Sample, or hover over the rightmost sample and click the + (plus) button. The maximum number of samples is 10.
- To remove a sample, click Remove Sample, or hover over the rightmost sample and click the x button. The minimum number of samples is 2.
- When you add or remove samples, the number of contacts per sample will adjust automatically. To manually adjust the number of contacts per sample, do one of the following:
- Select Show by percent and then either specify a percentage in the textbox provided, or drag the button on the slider bar just below the sample count display.
- Select Show by fixed sample size and enter the exact number of contacts to include in each sample.
- NOTE: To create samples of equal size with no separate "Winner" sample at the end of the test period, drag the slider to the far right, or set the Show by percent option to 100%.
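The sample-size arithmetic above can be sketched as follows, assuming two samples drawing 20% of the list each (the counts are hypothetical):

```python
# Illustrative split: each sample draws a percentage of the list, and the
# contacts not assigned to any sample are reserved for the winning send.
total_contacts = 1_000
num_samples = 2
percent_per_sample = 0.20  # assumed per-sample percentage for this example

per_sample = int(total_contacts * percent_per_sample)    # 200 contacts per sample
winner_pool = total_contacts - num_samples * per_sample  # 600 remain for the winner
```

Setting the percentage to 100% divides the entire list among the samples, leaving no separate winner pool, which matches the NOTE above.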
- Select the portion of the message you wish to test with your samples. You may select any combination of one or more of the following:
- Subject Line defined in one or more Templates
- From Address defined in the List
- Content defined in Templates
- If you have selected Automatically select winner and send remainder, a tie between multiple samples is possible. Select the sample to send as the winner in the event of a tie. This setting is optional and applies only to automatic sends. If the tie breaker is not one of the tied samples, you will still receive an email to manually select the winner. **
- For each sample, click SELECT to choose a Template. If you do not include Content as part of your test, you must select a single Template to use for each sample.
- If you are testing the Subject Line, enter a unique Subject for each sample. You may include Template Tags in the subject line, just as you would within a Template definition, such as Custom Data, Contact Data, and Fillin Fields.
- If you are testing the From Address, enter a unique From Address for each sample. You may include both the Decorative portion and the email address in the following format:
"Decorative" <mailbox@domain.com>
- Click SAVE to save the definition and return to the list of A/B Testing Definitions, or CANCEL to close the A/B Testing page without saving.
** The option Who will be the Winner in case of a tie? is new in v18.07.