User Engagement

Initial Attempts to Gather Feedback

This past week, I set out to gather feedback from gaming groups I’ve been active in to see how well Jeff resonates with my target audience. I started by asking around in my online gaming friend groups and identified two individuals with ChatGPT Plus access, hoping they could help me give Jeff a spin. Unfortunately, both declined, explaining that Jeff didn’t feel relevant to them because they don’t experience difficulties in social interactions offline. This experience highlighted a key limitation of building Jeff on the GPT platform: using my CustomGPT requires a paid subscription, which restricts access to a smaller group of users and limits the diversity of feedback I can gather to refine Jeff for a wider audience.

Unplanned Testing Session in Class

To my surprise, an unplanned testing session took place during a break in one of Richard’s lectures. Two classmates, Iggy (who identifies with the struggle of bridging his online and offline social behaviors) and Ziyi (who plays online games but doesn’t experience the same online-offline persona gap), started playing around with the visualization feature. They seemed genuinely entertained, generating multiple visualizations of their archetypes without moving on to the mental rehearsal phase. I followed up with them afterwards to understand their experience.

Iggy explained that he found the visualization component very engaging and “fun”; he kept generating new images to see the different variations Jeff could come up with. He mentioned that even people who don’t struggle with social confidence offline, like Ziyi, might enjoy this feature too. I appreciated this piece of feedback, as it showed me Jeff’s potential to appeal beyond its original purpose. The visualization tool itself seemed to have a strong hook, drawing users in and keeping them engaged.

Some visualizations of Iggy & Ziyi’s archetypes

The Potential Issue of “Stalling” at Visualization

However, this raised a potential issue: while the visualization feature hooks and engages users, it might also stall their progress through Jeff’s intended flow. By repeatedly generating new visualizations, users might be less inclined to move on to the more reflective, growth-oriented stage of mental rehearsal. This made me realize the importance of structuring Jeff to maintain engagement while still encouraging users to continue past visualization.

Visualization Accuracy Concerns

Ziyi’s feedback also provided some insights. While she thought it was a fun experience, some problems stood out. Since her online and offline archetypes were very similar, Jeff, as expected, produced nearly identical visualizations for both. However, she felt the visualizations didn’t fully capture the nuances of how she saw herself and her personality, which is why she went on to generate several more images in search of one that felt accurate. Iggy then also pointed out the need for more customization in the visualization phase, such as options for gender, style, or other personal attributes, which could make the archetypes feel more nuanced, accurate, and relatable.

Reflections and Next Steps

One of the main limitations I’m facing is the accessibility barrier created by relying on the ChatGPT platform, which requires users to have a ChatGPT Plus subscription. This creates a bottleneck: it restricts the pool of potential testers to those who both subscribe to the service and identify with the issues Jeff aims to address. So far, finding individuals who meet both criteria has been challenging.

To work around this, I’m considering a more localized approach for testing. Rather than only looking for people who meet the criteria in my online networks, I’ll also try to find people in my local network who resonate with the issues Jeff addresses, such as social anxiety or identity gaps. I can then have them test Jeff directly on my laptop, circumventing the need for a subscription. While this is not ideal for scalability, it’s more practical in the short term. I’m confining this initial testing phase to my network because it aligns with an observation from my previous tests: many individuals in my target audience tend to be uncomfortable interacting with complete strangers, so focusing on people within my own online and offline networks, or friends of friends, will likely create a more comfortable testing environment. This should allow for more authentic engagement and feedback without the added social pressure that might come from interacting with unfamiliar people.

Another challenge that came up this week concerns the engagement structure itself: specifically, users could get stuck in the visualization phase, generating new images over and over instead of moving on to the core mental rehearsal exercises. While the visuals add an engaging element, there’s a risk of them unintentionally becoming the main attraction, diverting users from the primary goal of the tool. To address this, I’m considering ways of subtly prompting users to move forward after a few visualizations, keeping them engaged without letting them linger too long on the visualization feature.

Lastly, I’ve received feedback suggesting that the visualizations themselves could benefit from more nuance, such as options for specifying gender or other basic traits. While I agree this could increase relatability, I’m cautious about making the visualization process too detailed, as it could shift the tool’s focus away from behavioral reflection and into something more like an avatar generator. Instead, I might add a couple of simple prompts at the beginning to capture key traits that users feel are most relevant to their identity, keeping it personal without over-complicating the setup.

