A/B Testing for Expo & React Native Apps

If you’re building an app using Expo or React Native, chances are you’ve considered running A/B tests. But what tools do you really need? I’ll talk you through the process and show you that it’s much easier than you might think.

Timothy Daniell
4 min read · Jan 18, 2021

What is Expo?

Expo is an open-source platform for making universal native apps for Android, iOS, and the web with JavaScript and React.

Expo has become the platform of choice for many React Native developers. I’ll be using it for the tutorial below, but the same approach should work for any React Native app.

What is A/B Testing?

A/B testing is a product development technique that lets you methodically compare the efficacy of different versions of your product by running two or more versions in parallel.

Should I just use one of those A/B Testing tools?

You could, but these tools are typically expensive, and they often end up duplicating your existing product analytics tool with fewer features. The approach I outline below uses your existing tools, with minimal development work.

The Basics of A/B Testing

A/B testing essentially requires three things:

  1. Sort users into buckets.
  2. Build two different experiences (“variants”), and show them to users depending on their bucket.
  3. Track conversion rates for the variants, and measure the statistical significance of the difference, to determine a winner.

How do we do that with Expo?

Let me demonstrate each of those three things by building a small demo. We’re going to A/B test a line of text, measuring conversions as clicks on a “Buy” button.

Step 1: Sort users into buckets

This is pretty simple: randomly assign each user to a test bucket.

const items = ["Bucket A", "Bucket B"]
const item = items[Math.floor(Math.random() * items.length)]

Note: if your users will see the screen more than once, you’ll want them to stay in the same bucket so that their experience doesn’t change. This can be achieved by either storing and retrieving their bucket value, or by replacing the above function with a hash function that takes their user id as an input.
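Here’s a minimal sketch of the stored-bucket approach, assuming the community AsyncStorage package (any persistent storage would work, and the "ab-test-bucket" key is just a name I picked for this example):

import AsyncStorage from "@react-native-async-storage/async-storage"

// Return the stored bucket if one exists; otherwise assign one at random and persist it
async function getBucket() {
  const stored = await AsyncStorage.getItem("ab-test-bucket")
  if (stored) return stored

  const items = ["Bucket A", "Bucket B"]
  const item = items[Math.floor(Math.random() * items.length)]
  await AsyncStorage.setItem("ab-test-bucket", item)
  return item
}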

Step 2: Build two variants

Make a copy of the component you want to change for the test, then show one or the other depending on the user’s bucket.

{item === "Bucket A" && (
  <Text style={{ fontSize: 48 }}>Persuasive Text A</Text>
)}
{item === "Bucket B" && (
  <Text style={{ fontSize: 48 }}>Persuasive Text B</Text>
)}
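If you went with the stored-bucket approach, you’ll need to load the bucket before rendering. Here’s a minimal component sketch (getBucket is the hypothetical helper from Step 1):

import React, { useEffect, useState } from "react"
import { Text, View } from "react-native"

export default function ExperimentScreen() {
  const [item, setItem] = useState(null)

  // Load the persisted bucket once when the screen mounts
  useEffect(() => {
    getBucket().then(setItem)
  }, [])

  // Render nothing until the bucket is known, to avoid flashing the wrong variant
  if (!item) return null

  return (
    <View>
      {item === "Bucket A" && (
        <Text style={{ fontSize: 48 }}>Persuasive Text A</Text>
      )}
      {item === "Bucket B" && (
        <Text style={{ fontSize: 48 }}>Persuasive Text B</Text>
      )}
    </View>
  )
}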

Step 3: Track conversion rates

In order to evaluate your experiment, you need to track conversion rates for each variant.

First, track when users “enter” the experiment, and which bucket they were assigned to. I’m using Amplitude for event tracking, but the same approach will work with whatever product analytics tool you use.

Amplitude.logEventWithPropertiesAsync("experiment:entered", { bucket: item })
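The snippet above assumes the Amplitude module has already been imported and initialized. With Expo’s expo-analytics-amplitude package, that looks something like this (the API key is a placeholder):

import * as Amplitude from "expo-analytics-amplitude"

// Initialize once at app startup, e.g. in App.js
Amplitude.initializeAsync("YOUR_AMPLITUDE_API_KEY")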

Second, track when users “convert”.

Amplitude.logEventAsync("experiment:converted")
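In this demo, conversion is a tap on the “Buy” button, so the event fires in the button’s press handler. A sketch:

import React from "react"
import { Button } from "react-native"
import * as Amplitude from "expo-analytics-amplitude"

// Log the conversion event when the user taps Buy
function BuyButton() {
  return (
    <Button
      title="Buy"
      onPress={() => Amplitude.logEventAsync("experiment:converted")}
    />
  )
}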

Build a funnel in your analytics tool, starting with the enter event and ending with the conversion event. Then group the funnel by the bucket property from the first step to see conversion rates for each variant.

Many analytics tools include significance calculations, as shown below in Amplitude, but if yours doesn’t, you can plug the counts from your funnel chart into a significance calculator like this one.
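If you’d rather compute it yourself, the standard two-proportion z-test is straightforward. A rough sketch (the counts below are made up for illustration):

// Two-proportion z-test on the entered/converted counts from your funnel
function zScore(enteredA, convertedA, enteredB, convertedB) {
  const pA = convertedA / enteredA
  const pB = convertedB / enteredB
  const pPooled = (convertedA + convertedB) / (enteredA + enteredB)
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / enteredA + 1 / enteredB))
  return (pA - pB) / se
}

// |z| > 1.96 means the difference is significant at the 95% confidence level
console.log(zScore(1000, 100, 1000, 140)) // ≈ -2.75, so Bucket B's lift is significant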

Building an Experiment Process

Hopefully the above helps you run your first app A/B test, and you feel pretty great about it. But you might start asking yourself questions like:

  • How long should I run each test for?
  • How should I decide which tests to run?
  • How many variants should I include in each test?
  • How do I run more complex tests across several screens?
  • Can I run several tests at the same time?
  • Do I need to re-run tests if something else in my product changes?
  • What’s the best way to roll out winning variants?

Through my agency Permutable, I help companies set up a structured experimentation process to meet their product and growth goals, and to answer questions like those above. Please get in touch if you’d like some help.

What are you testing?

I’d love to hear what app you are building, and what you are thinking of A/B testing, in the comments below.
