I’d like to share a few methodologies that worked particularly well for RidePal, but for confidentiality reasons I can only discuss the initial design, not the findings or results. There’s still much to consider!

RidePal understands the importance of having a clear picture of our users’ experience. We want to see a trending increase in overall satisfaction, and to be able to track satisfaction in the aspects of the user experience that we’re addressing. The plan is to sample users’ experience once per quarter at first, and gradually work towards real-time experience sampling.


I analyzed our user base and determined that our survey would have to address three distinct user groups:

  1. Current riders
  2. Past riders
  3. People who have registered but not used the service

For current riders, I aimed to capture overall satisfaction by using three questions to get at that one construct (questions 2, 3, & 4 below). I wanted to isolate aspects of their experience that were particularly good or bad (question 5), and to see whether feedback on those aspects correlated with their perceived importance (question 6). Finally, I captured data for product planning (I was in the early phases of producing a mobile app) and basic demographics. To control for order-effect biases, I randomized the order of questions 5 & 6, randomized the sub-questions within questions 5 & 6, and fixed questions 1–4 at the top.
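The per-respondent randomization described above can be sketched as follows. This is a minimal illustration, not the actual survey tooling; the question labels and aspect names are hypothetical.

```python
import random

def build_question_order(seed=None):
    """Build one respondent's question order: questions 1-4 fixed at the
    top, questions 5 and 6 randomized both between and within themselves."""
    rng = random.Random(seed)

    # Fixed-order block: always presented first to keep the overall
    # satisfaction items free of aspect-specific priming.
    fixed = ["Q1", "Q2", "Q3", "Q4"]

    # Hypothetical aspects rated in Q5 (feedback) and Q6 (importance).
    aspects = ["wifi", "seating", "punctuality", "driver", "cleanliness"]

    # Shuffle sub-questions independently within Q5 and within Q6.
    q5 = [("Q5", a) for a in rng.sample(aspects, len(aspects))]
    q6 = [("Q6", a) for a in rng.sample(aspects, len(aspects))]

    # Randomize whether Q5 or Q6 is shown first for this respondent.
    blocks = [q5, q6]
    rng.shuffle(blocks)

    return fixed + blocks[0] + blocks[1]
```

Seeding per respondent (or not at all) gives each participant an independent ordering, so any residual order effect averages out across the sample.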

Past riders were given the same survey below and allowed to choose their own user-type classification (question 1), since I could segment out users who had not used the service recently during post-hoc analysis. Here I aimed to identify the main reasons for rider attrition.

People who have registered but not used the service were given a different survey examining why they had not yet ridden.


I wanted the relevant department at RidePal (e.g., Product, Operations, Customer Service) to be able to act on the results, so I knew the survey results would have to be representative. Through a sample size calculation, I aimed for 95% confidence with a margin of error of ±7%. That is to say, if I found that 80% of respondents were “Extremely Satisfied” with the Wifi, I could go to Operations and say: “Wifi is great! I’m 95% confident that 80% ± 7% of our users are ‘Extremely Satisfied’ with our Wifi.”
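The target above follows the standard sample-size formula for estimating a proportion, n = z²·p(1−p)/e², with the most conservative assumption p = 0.5. A minimal sketch (the function name is mine, not part of the original analysis):

```python
import math

def sample_size(margin, z=1.96, p=0.5):
    """Minimum number of respondents to estimate a proportion.

    margin -- desired margin of error (e.g. 0.07 for +/- 7%)
    z      -- z-score for the confidence level (1.96 for 95%)
    p      -- expected proportion; 0.5 maximizes variance, so it is
              the safest default when the true proportion is unknown
    """
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Roughly 200 completed responses per segment are needed for a
# +/- 7% margin at 95% confidence.
n = sample_size(0.07)
```

Note this assumes simple random sampling; a low response rate does not just shrink n, it can also bias who ends up in the sample, which is why the distribution e-mail mattered so much.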

To achieve results with high external validity, I needed users to first open the survey and then complete it. This came down to a good e-mail subject line, an e-mail that compelled action, and a survey that was short and intelligible. To accomplish this, I pulled enough information from our database to segment users by number of rides taken, and included their names in the distribution.
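The segmentation and personalization step can be sketched like this. The field names, thresholds, and e-mail copy are all illustrative assumptions, not RidePal’s actual schema or wording.

```python
# Hypothetical rider records pulled from the database.
riders = [
    {"name": "Alex", "email": "alex@example.com", "rides": 12},
    {"name": "Sam",  "email": "sam@example.com",  "rides": 0},
    {"name": "Jo",   "email": "jo@example.com",   "rides": 3},
]

def segment(rider):
    """Assign a survey segment by ride history (thresholds are illustrative)."""
    if rider["rides"] == 0:
        return "registered_not_ridden"
    if rider["rides"] >= 5:
        return "frequent_rider"
    return "occasional_rider"

def subject_line(rider):
    """Personalized subject line to lift open rates (copy is illustrative)."""
    return f"{rider['name']}, how was your RidePal ride?"

# Group riders so each segment receives the survey variant meant for it.
by_segment = {}
for rider in riders:
    by_segment.setdefault(segment(rider), []).append(rider)
```

Grouping up front also makes it easy to track per-segment response rates, which feeds back into whether the ±7% margin is actually met for each group.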