Stitch Fix Vision: Designing an AI-powered personalized image gallery

5 MINUTES READ

Launched June 2024


Company

Stitch Fix

Date

Launched June 2024

Role

Lead Product Designer

Some experiences don’t start with a problem—they start with an idea.


Vision is one of those projects. This case study walks through how our team took a bold, innovative idea in the fast-moving landscape of AI and turned it into a user-centric feature that is live and loved by clients.

Stitch Fix has always been about personalized outfitting, and with new AI capabilities on the market, we wondered how we might take outfitting a step further.


This wasn’t just about new technology—it was about imagining the next level of personalized shopping, ahead of what the market thought was possible.


We set out to make the idea real before AI image generation was fully mature, defining an experience that would feel seamless, engaging, and true to Stitch Fix’s approach to style.


The idea was this: What if we could use Generative AI to create inspiring, photorealistic images of clients wearing Stitch Fix merchandise, in a natural real-world setting?


I had the opportunity to work as the sole designer on a small, nimble team that included research, data science, and engineering. Though this work stream had high visibility with our leadership team, the working team was kept intentionally small to optimize for speed.

Together, we set out to make this ambitious idea real.

Our first goal: Build an AI pipeline to start generating images we could test with

We partnered with an agency to build the backend and refine photo quality. Early testing of outputs helped us understand what we would need from users—from photo inputs to contextual details—to deliver realistic results.

Early user testing showed problems in the AI model that were off-putting to users


We prioritized testing our image quality as often as possible, and our first round of user testing showed us that we had a lot more work to do on the image quality before this technology could be production-ready.


We were seeing problems with the model making everyone look skinnier, more muscular, and taller than they actually were. The AI was also hallucinating badly when it came to clothing, which we knew was a crucial part of this experience.

As our tech team worked on improving the model, I used low fidelity UI designs to ideate the client experience


We knew we wanted to share AI images with clients, but we didn't yet know how that could work in our experience. Where should this feature live? How should it work? All of that was unclear, which was where these early explorations came in.


I intentionally worked in low fidelity in order to communicate feature ideas without getting stuck on how our exact images would look.


I presented these ideas to our executive leadership team in order to gather feedback and input on the direction of the project. Working in low fidelity allowed us to debate the feature strategy without getting stuck on the UI details.



Once we had improved image quality, we gathered a diverse focus group to hear first-hand what our clients thought about the images, the outfits, and AI in general


It's always such a privilege to talk to clients and potential users in a small group setting. For this focus group, we again shared personalized imagery and discussed participants' thoughts on using AI imagery to shop.


We even brought back some of the same participants from our first round of testing, to hear what they thought about the images now that we had improved the image quality!

Design explorations and feedback loops with our executive team helped refine the final experience

I explored multiple design approaches, iterated rapidly over several weeks, and used each review cycle to sharpen both the vision and the execution.


I also led alignment with cross-functional teams and partners, like engineering and marketing. Marketing had early interest in featuring this as part of a future campaign, so I kept them consistently informed and incorporated their perspective into the design process.


These collaborative loops helped us converge on an experience that was both compelling for users and well-positioned for launch.

Designing the details: The final Vision onboarding experience walks the client through step by step


Vision requires photo uploads, a moment where clients often hesitate. We designed a friendly, approachable flow that guides each step, shows examples, and explains exactly how their photos will be used.

Vision Gallery: Images drop weekly, building anticipation and making each batch feel intentional


The weekly drop became a foundational part of the Vision experience.


By pacing new images as curated, time-based releases, we created a reason for clients to come back regularly and made each set feel meaningful—not just more instant content.


This approach also reinforces Vision’s differentiated value: it evolves over time, and users get to see that progress.


Post-launch, we are seeing revenue from Vision clients almost double! This validates that a predictable reveal cadence can meaningfully increase both revisit and purchase behavior.

This experience launched in June 2024, and has been a huge success for Stitch Fix.

Vision was launched with an accompanying PR campaign which led to a 10% bump in overall traffic when it was announced.

And we've continued to improve image quality as AI capabilities improve.


You can see how the quality has changed over time from these examples from my own Vision drops!

Results in the numbers

0% Image satisfaction

0x Revenue generated

0k Early adopters

0% Reactivations