Blog Update #6a – Pilot Test:
After analyzing our field study outcomes, we proposed two distinct interface designs for experimentation. We drafted an experimental protocol, including detailed procedures and consent forms, and ran a pilot study with two participants. The pilot mirrored both our intended recruitment strategy and the structure of our planned experiment by dividing participants into two groups, experienced and inexperienced sellers, to replicate the conditions and diversity of our main study's demographic. Participants completed the interface tasks efficiently, and we found no significant issues in the experimental design.
However, the medium-fidelity prototype revealed oversights in critical user scenarios. We built the prototype in Figma and ran it with a "Wizard of Oz" methodology, and it quickly became apparent that some peripheral functionalities were missing, detracting from the user experience. This led us to modify our experimental approach so that interaction with the prototype would be more controlled yet still adaptable. To address timing concerns, we added a timer with a split (lap) feature, letting us precisely measure how long participants spent on each task. This iterative refinement should improve the accuracy of our methodology in the experimental phase.
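The split-timer idea above can be sketched in a few lines. This is a hypothetical illustration (we used an off-the-shelf stopwatch app in the actual sessions, and the class/method names here are ours): one running clock with "split" marks, so each sub-task's duration can be read off afterwards.

```python
# Hypothetical sketch of a split timer: one running clock with lap
# ("split") marks, so each sub-task's duration can be recovered later.
import time

class SplitTimer:
    def __init__(self):
        self.start = None
        self.splits = []

    def begin(self):
        self.start = time.perf_counter()

    def split(self):
        """Record a split; return seconds elapsed since the previous mark."""
        now = time.perf_counter()
        last = self.splits[-1] if self.splits else self.start
        self.splits.append(now)
        return now - last

timer = SplitTimer()
timer.begin()
# ... participant performs task 1 ...
task1_seconds = timer.split()
# ... participant performs task 2 ...
task2_seconds = timer.split()
```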
Blog Update #6b - Experiment Abstract:
We recruited 8 university students of varying selling experience via our campus network for our study, aiming to assess the platform’s usability across different user skills. Participants were categorized into skilled or unskilled groups based on a selling experience survey.
Two interface designs were evaluated: “Quicklist,” inspired by trading platforms, with a fast-scrolling menu; and “Boxed,” resembling traditional platforms, with a segmented, exploratory design. We measured task completion times, errors, and gathered usability and enjoyment feedback through a post-experiment questionnaire.
The interaction between Interface and measurement Type ("Likeness" vs. "Time") significantly affected the dependent variable Value (F = 7.67, p < 0.05): interface A was preferred over B, and tasks were completed faster on A, in line with our hypothesis. This trend held across experience levels, with both experienced and inexperienced users favouring A for likability and efficiency. Pairwise comparisons further confirm these findings, showing a greater preference for, and quicker task completion on, interface A.
Blog Update #6c – Annotated Output from Quantitative Analysis:
We utilized a linear mixed model (LMM) to examine the effects and interactions between our main variables: interface design (Design 1 vs. Design 2), type of feedback (Time taken vs. Likeness score), and participants' tech experience (experienced vs. inexperienced). The utility of a mixed model lies in its flexibility. It allowed us to account for both fixed effects (things we're directly manipulating or interested in, like interface design) and random effects (variations among subjects that we can't control).
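As a minimal sketch of this kind of model, the analysis can be reproduced with statsmodels' `MixedLM`. The data below is synthetic (only the column names follow our Table-1; the generated values and effect sizes are placeholders, not our measurements), with a fixed Interface:Type interaction and a random intercept per participant:

```python
# Hedged sketch of the LMM: fixed effects for Interface, Type, Experience
# and the Interface:Type interaction, random intercept per participant.
# Data here is synthetic; only the column layout matches our study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

rows = []
for p in range(1, 9):                    # 8 participants
    experience = p % 2                   # alternate experienced / inexperienced
    offset = rng.normal(0, 1)            # per-participant random intercept
    for interface in ("A", "B"):
        for mtype in ("Time", "Likeness"):
            base = 20 if mtype == "Time" else 4
            effect = 2 if (interface == "B" and mtype == "Time") else 0
            rows.append({"Participant": p, "Experience": experience,
                         "Interface": interface, "Type": mtype,
                         "Value": base + effect + offset + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)                  # long-form table, 32 rows

model = smf.mixedlm("Value ~ Interface * Type + Experience",
                    data=df, groups=df["Participant"])
result = model.fit()
print(result.summary())
```

The `groups` argument is what makes the model "mixed": each participant gets their own random intercept, absorbing between-subject variation we cannot control.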
To get these results, we first had to organize the data from the experiment and the questionnaire. The data was arranged as follows:
| Participant | Experience | Interface | Type | Value |
|---|---|---|---|---|
| 1 | 1 | A | Time | 19.81 |
| 2 | 0 | A | Time | 21.03 |
| 3 | 1 | A | Time | 18.69 |
| 4 | 0 | A | Time | 18.40 |
| ... | ... | ... | ... | ... |
Table-1: A "long-form" representation of the data. Each of the 8 participants appears 4 times, once per combination of "Interface" design and measurement "Type"; the "Value" column holds the measurement for that combination, and "Experience" records the participant's experience level.
In the table, Experience alternates between "1" and "0" every other participant, Interface switches between "A" and "B" every 8 rows, and Type switches between "Time" and "Likeness" every 16 rows.
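Reshaping into this long form is a one-liner with pandas. The sketch below assumes a hypothetical wide layout with one row per participant (the wide column names, and all B/Likeness values, are placeholders; only the four A/Time values come from Table-1):

```python
# Hypothetical reshaping sketch: one wide row per participant -> long form.
# Wide column names (e.g. "A_Time") and most values are placeholders.
import pandas as pd

wide = pd.DataFrame({
    "Participant": [1, 2, 3, 4],
    "Experience":  [1, 0, 1, 0],
    "A_Time": [19.81, 21.03, 18.69, 18.40],   # from Table-1
    "B_Time": [22.10, 23.55, 20.90, 21.75],   # placeholder values
    "A_Likeness": [4.5, 4.0, 4.2, 4.8],       # placeholder values
    "B_Likeness": [3.1, 3.4, 2.9, 3.6],       # placeholder values
})

# Melt the measurement columns, then split "A_Time" into Interface and Type.
long = wide.melt(id_vars=["Participant", "Experience"],
                 var_name="Measure", value_name="Value")
long[["Interface", "Type"]] = long["Measure"].str.split("_", expand=True)
long = long.drop(columns="Measure")

# Order rows to match Table-1: Time block before Likeness, A before B.
long = long.sort_values(["Type", "Interface", "Participant"],
                        ascending=[False, True, True]).reset_index(drop=True)
print(long.head())
```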
Results:
Main effects:
The Interface*Type interaction had an F value of 7.67 with p < 0.05, showing that the combination of Interface and measurement Type (Likeness/Time) had a significant effect on the dependent variable, Value.
Interaction between Interface*Type
Interface A paired with Likeness had a higher mean value than B with Likeness (p < 0.05), showing that people preferred interface A over B (consistent with our hypothesis). Likewise, A paired with Time had a lower mean value than B with Time, showing that people completed tasks on interface A faster (also consistent with our hypothesis, though this difference was not significant).
Interaction between Interface*Type*Experience
Regardless of experience, people liked interface A more than B. For time, people took less time on A than on B, and experienced participants completed A faster than inexperienced participants (see the pairwise comparisons table: the Mean Difference column shows a larger negative number).
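The pairwise tables above came from the stats package, but the underlying idea can be sketched as a paired t-test on each participant's A-vs-B difference. The data below is synthetic (the drawn times and the size of the B penalty are assumptions, not our measurements); a negative mean difference means A was faster:

```python
# Hedged sketch of one pairwise comparison (A vs. B on completion time)
# as a paired t-test. All values here are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time_A = rng.normal(19.5, 1.0, size=8)            # 8 participants on A
time_B = time_A + rng.normal(2.0, 0.8, size=8)    # B assumed ~2 s slower

t_stat, p_value = stats.ttest_rel(time_A, time_B)
mean_diff = np.mean(time_A - time_B)              # negative => A faster
print(f"mean difference (A - B): {mean_diff:.2f} s, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```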
Graphs
The graphs plot Interface against estimated marginal means for both measures (1 = Time, 2 = Likeness).
Blog Update #6d – Revised Supplementary Experiment Materials:
We changed the post-experiment questionnaire into a SUS (System Usability Scale) questionnaire, identical for both interfaces:
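A SUS form is scored the standard way: each of the 10 items is answered on a 1-5 scale, odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal scoring function:

```python
# Standard SUS scoring: odd items (response - 1), even items (5 - response),
# sum scaled by 2.5 to a 0-100 score.
def sus_score(responses):
    """responses: list of 10 answers on a 1-5 Likert scale (items 1..10)."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral answers (3s) land exactly at the midpoint.
print(sus_score([3] * 10))  # -> 50.0
```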