Mijem

UX Research
project overview
Mijem is a mobile app that connects young adults, primarily through the buying and selling of second-hand items. I supported the redesign of the product by conducting usability tests, and assisted with research and feedback sessions for Mijem’s new website and web app.
project brief
What: Usability tests with over 25 users across 10 mobile flows.

Role: UX Researcher

Client: Mijem

Note: as Mijem's final product has not yet been released, all images are from the previous product.

contents

part 1: research strategy

  • Strategy session
  • Research objectives
  • Assumptions
  • Participant Criteria

part 2: research process

  • Test Script and Tasks
  • Outcomes Document
  • Synthesized Results
  • Priority Matrix
  • Compiled Report
  • Findings Presentation

part 3: feedback session

  • Methodology Selection
  • Adoption Incentives

introduction

Mijem’s mobile app began as a secondhand marketplace for university students and was gaining traction across campuses. But with COVID-19 keeping students off campus, the app needed to branch out to serve young adults beyond the university community.

Mijem's previous product focused primarily on a student second-hand marketplace. Users could browse products within their communities, centred around schools.

part 1

research strategy


Before I began any research, I set up a meeting with Mijem’s founder and CEO to discuss Mijem’s business goals. The purpose of this meeting was for me to better understand the product and company, which would help me set research objectives that delivered the most useful results. 

The following are a few examples of the research objectives:

assumptions

After setting research objectives, I made a list of assumptions I had related to these objectives. Crafting assumptions makes me aware of my own potential biases and allows me to tailor questions and strategies to validate or invalidate those assumptions.

The following are examples of assumptions related to the checkout flow:

participant criteria

Together, the business strategy, research objectives, and assumptions informed my participant criteria:

part 2

usability research process

research plan document

Before conducting a usability study, I prepared my research plan, which included the test script, tasks, and an outcomes document.

test script and tasks

My test script was organized in the following outline:

The tasks were carefully constructed based on the research objectives for the set of flows being tested. For example, the objective “discover whether a user can complete the checkout process” involves two actions from the user:

I rephrased these actions as tasks to represent a more real-world scenario: “Imagine you’re looking to purchase a hoodie under $25 from someone in your area. Can you use Mijem to find a hoodie that meets your criteria?” (FINDING task) and “Now could you use Mijem to purchase that hoodie?” (PURCHASING task). Rephrasing tasks in real-world language primes users to behave as they would outside a usability test.

conducting studies

While running participants through the session, I asked them to speak out loud as much as possible, while I primarily stayed silent. When participants asked me questions directly, or made ambiguous statements, I used the Boomerang, Echo, and Columbo methods.

The Boomerang method, answering a question with another question, worked particularly well. For example, when a participant asked “what does this mean?”, I would return with “what do you think it means?”

Most participants were happy to guess, which provided insight into whether a user actually understood the design but just lacked confidence. For particularly important parts of the design, I was able to follow up and ask "why were you unsure about that feature?" so we could address the confusion in future iterations.

recording and synthesizing data

As I ran the studies, I kept notes in the form of quotes from the participants. By keeping record of quotes, rather than writing my own interpretation, I was able to reduce confirmation bias and work with raw data. 

To synthesize data, I organized my findings under their associated tasks so I could compare results between participants. I then analyzed the data in terms of usability impact, considering how many participants struggled and the degree to which they struggled. This was not a rigid scientific process, but rather an evaluation informed by my experience in UX design, a process fitting for a qualitative study. I then placed every identified problem on a priority matrix (usability impact by effort) and presented the list to Mijem’s designer for review.
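The prioritization step above can be thought of as a simple ranking: issues that affect more participants more severely, and are cheap to fix, float to the top. The sketch below is purely illustrative; the issue names, counts, and scores are hypothetical examples, not Mijem study data.

```python
# Illustrative sketch of an impact-by-effort priority matrix.
# All issues, counts, and scores below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Finding:
    issue: str
    participants_struggled: int  # how many of the 5 participants struggled
    severity: int                # 1 (minor friction) .. 3 (blocked the task)
    effort: int                  # estimated fix effort, 1 (low) .. 3 (high)

    @property
    def impact(self) -> int:
        # A rough qualitative score: breadth of struggle times severity.
        return self.participants_struggled * self.severity


findings = [
    Finding("Checkout button hard to find", 4, 3, 1),
    Finding("Filter labels unclear", 2, 1, 2),
    Finding("Sign-up form too long", 3, 2, 3),
]

# High-impact, low-effort issues sort to the top of the review list.
for f in sorted(findings, key=lambda f: (-f.impact, f.effort)):
    print(f"{f.issue}: impact={f.impact}, effort={f.effort}")
```

In practice this grading was a judgment call made from notes and quotes, not a computed score; the point of the matrix was simply to give the designer an at-a-glance ordering of what to fix first.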

report and presentation

For each session I conducted, I compiled key findings into a report and presented a summary of the results to Mijem’s CEO and Lead Designer. These usability tests led to both minor and major design changes, such as the removal of features and the reorganization of navigation.

Overall, this process was repeated across four sessions, each with five Gen-Z participants.

part 3

feedback session

overview

In tandem with the new app design, Mijem’s designer also created a new webpage and web app. Initially, the CEO wanted to run a feedback session on the new webpages and a separate usability session on the web app.

method selection

Considering the research objectives for this request, I recommended we look at the website and web app in the same session to see how well the web app matched users’ expectations after seeing the landing page. In particular, I wanted to test how users perceived the flow of finding the website, signing up for an account (via desktop), and then using the web app. Since Mijem’s primary business goal is to acquire more mobile app users, this website-to-web-app flow needs to make a strong impression and naturally lead to downloading the mobile app.

adoption incentives

In addition to feedback, this session explored what financial incentives Mijem could offer to encourage early adopters of the product. Mijem’s marketing department offered a few options, such as prize entries, discounts, and cash back. I asked potential users about their impressions of these incentives and why they felt the way they did.

conducting the session

As the purpose of this session was not to find usability issues, I ran this research more like user interviews, where I asked questions to understand users' behaviours, motivators and pain points. Soliciting feedback on designs in this manner allowed us to see whether users understood Mijem's concept based on the marketing website and how compelling certain features were to our users.

This session did come with a caveat, however: users are notoriously inaccurate at predicting their own behaviour. So in addition to asking for their impressions of the marketing page, I also asked them about other pages they had seen that were memorable, and other incentives they had taken advantage of.

part 4

conclusions

key learnings

I learned a lot from this process. Not only was it my first usability study conducted for a client, but it was a lengthy research process, providing opportunities for reflection and improvement after each session.

Check out my blog post detailing some of my key findings.
Read Post