A/B testing is a method used in marketing, including business-to-business (B2B) contexts, to compare two versions of a webpage, email, advertisement, or other marketing asset and determine which one performs better against a specific goal, such as increasing conversions or engagement.
In A/B testing, members of the audience should be randomly assigned to either Version A or Version B so that the two groups are similar in characteristics and behavior. Each version is often referred to as a condition; an A/B test has exactly two conditions.
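One common way to implement the random assignment described above is deterministic hashing of a user identifier, so each visitor is bucketed once and always sees the same version. The sketch below is illustrative; the function name and experiment label are hypothetical.

```python
import hashlib

def assign_condition(user_id: str, experiment: str = "homepage-test") -> str:
    """Assign a user to condition A or B by hashing their id.

    Hashing (rather than a coin flip on every visit) keeps the
    assignment stable: the same user always gets the same condition.
    Including the experiment name gives each test an independent split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user id always maps to the same condition:
print(assign_condition("user-42"), assign_condition("user-42"))
```

Because SHA-256 output is effectively uniform, large audiences split close to 50/50 between the two conditions.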
It is also important to collect enough responses to conclude that one treatment really performed better than the other. A few more clicks for condition A than for condition B may not be a statistically significant (reliable) difference. While many factors determine an adequate sample size for an A/B test, think hundreds rather than dozens of responses per condition. See the section on Statistical Power below to learn more about finding adequate sample sizes.
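To make the "hundreds rather than dozens" intuition concrete, the standard normal-approximation formula for comparing two proportions gives a rough sample size per condition. The sketch below assumes a two-sided test on conversion rates; the baseline and target rates are hypothetical.

```python
import math
from statistics import NormalDist

def sample_size_per_condition(p_a, p_b, alpha=0.05, power=0.80):
    """Approximate visitors needed per condition to detect a change in
    conversion rate from p_a to p_b (two-sided z-test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    variance_sum = p_a * (1 - p_a) + p_b * (1 - p_b)
    n = (z_alpha + z_beta) ** 2 * variance_sum / (p_a - p_b) ** 2
    return math.ceil(n)

# Detecting a lift from a 10% to a 12% conversion rate:
print(sample_size_per_condition(0.10, 0.12))  # thousands per condition
```

Note how quickly the requirement grows as the effect shrinks: detecting a 2-point lift on a 10% baseline already takes several thousand visitors per condition, far more than a few dozen.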
A two-sample t-test is a good way to determine whether the difference between the results of the two conditions is statistically significant, rather than attributable to chance.
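A minimal sketch of that test, using Welch's version of the two-sample t-test on a continuous metric (for example, per-visitor engagement scores). The data here are simulated, and for simplicity the p-value uses the normal approximation to the t distribution, which is reasonable at the sample sizes an A/B test should have.

```python
import math
import random
from statistics import NormalDist, mean, variance

def welch_t_test(a, b):
    """Two-sample (Welch's) t-test; returns (t statistic, two-sided p-value).

    Welch's variant does not assume the two conditions have equal
    variances. For large samples the t statistic is approximately
    standard normal, so the p-value below uses the normal CDF.
    """
    na, nb = len(a), len(b)
    se = math.sqrt(variance(a) / na + variance(b) / nb)  # std. error of the difference
    t = (mean(a) - mean(b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Simulated engagement scores: condition B has a slightly higher true mean.
random.seed(0)
cond_a = [random.gauss(5.0, 1.0) for _ in range(400)]
cond_b = [random.gauss(5.2, 1.0) for _ in range(400)]

t, p = welch_t_test(cond_a, cond_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the two conditions really do perform differently; a large one means the observed gap could easily be noise.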