BlueConic provides several prebuilt AI notebooks marketing teams can use (without writing any code) to apply the power of AI to your customer data, for example to predict which BlueConic dialogues will perform best among customer groups. Applying AI modeling to your dialogue optimization results can provide you with deeper insight into how your dialogues are performing.
The Advanced A/B testing notebook lets you select a BlueConic dialogue and examine how variants of this dialogue would perform against it. It performs a Bayesian statistical analysis on the results of the A/B test, and helps you determine next steps to take. The notebook's analysis requires that the dialogue you select has a control group.
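Although no coding is required to use the notebook, the following minimal sketch illustrates the kind of Bayesian comparison it performs on dialogue results. The click and view counts, the Beta(1, 1) prior, and the variable names are hypothetical assumptions for illustration and are not taken from the notebook's actual code.

```python
# Illustrative sketch only -- not the notebook's actual code.
# It assumes each dialogue arm is summarized by click and view counts,
# and uses Beta-Binomial posteriors to estimate the chance that a
# variant converts better than the control group.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical results: (clicks, views) for the control and one variant.
control_clicks, control_views = 120, 4000
variant_clicks, variant_views = 145, 4100

# Beta(1, 1) prior; posterior is Beta(clicks + 1, views - clicks + 1).
control_samples = rng.beta(control_clicks + 1, control_views - control_clicks + 1, 100_000)
variant_samples = rng.beta(variant_clicks + 1, variant_views - variant_clicks + 1, 100_000)

prob_variant_wins = (variant_samples > control_samples).mean()
print(f"P(variant beats control) ≈ {prob_variant_wins:.2%}")
```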
Creating and configuring an A/B testing notebook in BlueConic
- Select AI Workbench from the BlueConic navigation bar.
- Click Add notebook.
- A pop-up appears. Scroll down to the Advanced A/B testing notebook and click it.
- The notebook opens to the Notebook editor window.
If you have Notebook editor permissions, you can view the notebook's Python code here and read detailed documentation about how the notebook works and how the machine learning model uses customer data and dialogue metrics to run the A/B modeling and analysis.
- Click Parameters in the left-hand panel.
- In the Dialogue parameter, click to select an existing dialogue whose variants are shown to a subset of customers and visitors. Note: The notebook's analysis requires that the dialogue you select has a control group.
When you run the AI notebook, it will analyze the results to see how the variant dialogues perform against the original version.
- Save your settings before running the model.
Running the Advanced A/B testing notebook
- Go to the Schedule and run history page.
- In the metadata section at the top of the page, you can request email notifications each time the notebook runs or only for failed runs. For details, see: setting up email notifications for AI Workbench.
- Click Run now to run the analysis manually.
- To schedule the notebook to run at a future time, activate Enable scheduling. Click the Settings icon. Select how to schedule the run by choosing an option from the dropdown list:
- Every X minutes
- Number of times per day
- Days of the week
- Days of the month
- Weekday of the month
Set a time for the run. Click OK.
Viewing the results of your AI Workbench A/B testing
After running the notebook, you can view its output by clicking Preview. Scroll down to the Management summary section. Here you'll find A/B test results and advice on how to proceed. Further down the results screen, you'll find a graphical representation of the A/B testing analysis.
The sample graph below shows the posterior distributions for a set of dialogue variants compared against the original dialogue. In this example the original dialogue (furthest to the right, shown in blue) is the best-performing dialogue, but it has fewer results than the purple variant second from the right. Using Bayesian statistics, the notebook measures the area of overlap among competing dialogues to determine which one is likely to perform best (here, blue wins).
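As a rough intuition for how that overlap translates into a recommendation, the hedged sketch below samples from a Beta posterior for each dialogue and counts how often each one comes out on top. The counts, arm names, and prior are hypothetical and do not reflect the notebook's internal implementation.

```python
# Illustrative sketch only -- the notebook's internals are not shown here.
# Hypothetical click/view counts per dialogue; Monte Carlo samples from each
# Beta posterior estimate how often each dialogue has the highest conversion
# rate, which is what the overlapping posterior curves visualize.
import numpy as np

rng = np.random.default_rng(0)

arms = {
    "original": (160, 5000),   # (clicks, views) -- hypothetical numbers
    "variant A": (150, 4800),
    "variant B": (130, 5100),
}

samples = np.column_stack([
    rng.beta(clicks + 1, views - clicks + 1, 50_000)
    for clicks, views in arms.values()
])

# Fraction of posterior draws in which each arm has the highest rate.
best_counts = np.bincount(samples.argmax(axis=1), minlength=len(arms))
for name, count in zip(arms, best_counts):
    print(f"P({name} is best) ≈ {count / len(samples):.2%}")
```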
Further down, you can see the expected lift distribution for a variant against the original, including the region of practical equivalence between two similarly performing dialogues. In this example the results show the original dialogue is expected to outperform its variant, but more data is needed.
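The expected lift and the region of practical equivalence (ROPE) can be approximated in the same spirit, as in the hypothetical sketch below; the ±2% ROPE threshold and the counts are illustrative assumptions, not values used by the notebook.

```python
# Illustrative sketch only, using hypothetical counts. It approximates the
# lift distribution of a variant over the original and reports how much of
# that distribution falls inside a +/-2% region of practical equivalence
# (ROPE), i.e. the range where the two dialogues perform about the same.
import numpy as np

rng = np.random.default_rng(7)

original_clicks, original_views = 160, 5000   # hypothetical numbers
variant_clicks, variant_views = 150, 4800

original = rng.beta(original_clicks + 1, original_views - original_clicks + 1, 100_000)
variant = rng.beta(variant_clicks + 1, variant_views - variant_clicks + 1, 100_000)

lift = (variant - original) / original          # relative lift of the variant
rope = 0.02                                     # practical-equivalence threshold

print(f"Expected lift of variant vs. original: {lift.mean():+.2%}")
print(f"Share of lift distribution inside the ROPE: {(np.abs(lift) < rope).mean():.2%}")
```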
You can use the Notebook all cells insight to create a sharable dashboard with A/B testing analysis graphics.