CASE STUDY ▸ INTERNAL TOOL · WELLS FARGO SMALL BUSINESS BANKING TECH

Procedures Updater

I led UX discovery and design for an AI-powered internal tool that automated the most tedious part of enterprise change management: keeping procedure documentation in sync with technology updates.

// Role
UX Lead — discovery, research, wireframing
// Timeline
~3 months, discovery through wireframe handoff
// Team
1 UX (me), 1 engineering manager, 1 scrum master/PO, 3–4 junior engineers (AI rotation)
// Time Cost: Before
8–10 hrs
per Change Management teammate, per week
// Time Cost: After
~30 min
reviewing AI-suggested changes
// Wireframe
2 days
once the process was understood
// First Build
<1 wk
working first pass via GitHub Copilot
This was proprietary internal work at a financial institution. Original files stayed at the bank — the screens here are faithful recreations.
// PROBLEM

At Wells Fargo, every core job responsibility had to be documented in formal procedures that were reviewed and approved by someone outside the team before any technology change could go to production.

That sounds manageable until you realize: technology changes constantly.

In practice, this meant change management employees were doing two things that ate enormous amounts of their time. First, sitting through every single tech release demo call — sometimes hours long — just to spot anything that might affect a procedure they owned. Then, once they'd identified a change, manually rewriting the relevant procedure documentation themselves. We estimated this was costing each change management employee upwards of 8–10 hours. Every week.

Nobody had built anything to fix it. And there was no way technology could go to production until those procedures were updated and approved. The bottleneck was real, and it was expensive.

How might we automate the detection of procedure changes so that the right people are notified with the right information, without disrupting the existing approval process?

[01]

Empathize — The Labyrinth

Before a single wireframe could be drawn, we had to understand the process we were designing for. That turned out to be the hardest part of the whole project.

On paper, the policy update process sounded straightforward. In reality it was a Russian nesting doll — every question we answered revealed three more layers underneath. (The team nicknamed this phase White Rabbit. As in, follow it long enough and suddenly you're very far from where you started.)

The core challenge: no one owned the full picture. The process touched multiple teams, multiple approval chains, and a web of forms and compliance requirements that had evolved organically over years. Nobody had ever mapped it end to end.

So we did.

I led the stakeholder interviews and process mapping sessions, leaning on my manager's two decades of institutional relationships to get us in the right rooms. My job was asking the right questions and synthesizing what we heard into something we could actually design against.

The goal was never to redesign the process. It was to work within it and remove the bottleneck without sidestepping a single compliance requirement. That distinction shaped every design decision that followed.

[02]

Define

From discovery, a few things became clear:

The right person had to get the right notification. Routing had to be precise across multiple procedure owners and teams. When a procedure an owner was responsible for got flagged, that owner would receive an email with a direct link to their review queue (sketched in code after this list).

Trust was everything. Employees were being asked to rely on AI-generated edits to update procedure documentation. The tool had to make them feel confident, not anxious.

The process couldn't change. Two levels of approval were non-negotiable. The tool had to route through both, not around them.
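
To make that routing requirement concrete, here is a minimal TypeScript sketch of how one flagged change might fan out to the owners of every procedure it touches. Every name in it, from ProcedureChange to the queue URL, is my invention for illustration; the real implementation stayed at the bank.

// Hypothetical sketch of the routing requirement: one flagged change
// fans out to exactly the owners of the procedures it touches.

interface ProcedureChange {
  changeId: string;
  affectedProcedureIds: string[];
}

interface Procedure {
  id: string;
  name: string;
  ownerEmail: string;
}

interface Notification {
  to: string;
  subject: string;
  reviewQueueUrl: string; // deep link straight to the owner's queue
}

function routeNotifications(
  change: ProcedureChange,
  procedures: Map<string, Procedure>,
): Notification[] {
  const notifications: Notification[] = [];
  for (const procId of change.affectedProcedureIds) {
    const proc = procedures.get(procId);
    if (!proc) continue; // unknown procedure: skip rather than misroute
    notifications.push({
      to: proc.ownerEmail,
      subject: `Procedure "${proc.name}" flagged for review`,
      // illustrative URL shape only, not the bank's
      reviewQueueUrl: `https://tools.example/procedures/review?owner=${encodeURIComponent(proc.ownerEmail)}`,
    });
  }
  return notifications;
}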

[03]

Design

Once the process was mapped, the wireframing took two days. That's not a flex — it's a reflection of how much clarity the discovery work had bought us.

The Feed
A table-style landing page showing all procedures flagged for review: procedure name, owner, category, last updated date, status, and an AI confidence score giving users an at-a-glance read on which updates needed careful attention and which were routine.

[ The Feed — table view of flagged procedures with AI confidence scores ]
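
A rough sketch of the data a single Feed row would need (field names and status values are my assumptions, not the bank's schema):

// Hypothetical shape of one row in the Feed. Field names are
// illustrative; the original internal model is not public.

type ReviewStatus = 'pending' | 'in-review' | 'approved' | 'routed-to-compliance';

interface FeedRow {
  procedureName: string;
  owner: string;
  category: string;
  lastUpdated: Date;
  status: ReviewStatus;
  aiConfidence: number; // 0–1, surfaced so owners can triage at a glance
}

// Sorting lowest-confidence first puts the updates that need the most
// careful human attention at the top of the table.
function triageOrder(rows: FeedRow[]): FeedRow[] {
  return [...rows].sort((a, b) => a.aiConfidence - b.aiConfidence);
}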

The Review View
Clicking in opened a summary of changes first, then an optional full-text view: a side-by-side comparison of the existing and AI-recommended procedure, with removals in red strikethrough and additions in green underline.

[ Review View — side-by-side diff, existing vs. AI-recommended procedure ]
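
I didn't build the diff engine, but the rendering model is simple enough to sketch: a classic longest-common-subsequence pass that tags each word as kept, removed, or added. A minimal, self-contained version, assuming nothing about the production implementation:

// Minimal word-level diff: compute the LCS table, then walk it to
// emit keep/remove/add operations in document order.

type DiffOp = { kind: 'keep' | 'remove' | 'add'; text: string };

function diffWords(oldText: string, newText: string): DiffOp[] {
  const a = oldText.split(/\s+/).filter(Boolean);
  const b = newText.split(/\s+/).filter(Boolean);

  // lcs[i][j] = length of the longest common subsequence of a[i:], b[j:]
  const lcs: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0),
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      lcs[i][j] =
        a[i] === b[j]
          ? lcs[i + 1][j + 1] + 1
          : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }

  const ops: DiffOp[] = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) {
      ops.push({ kind: 'keep', text: a[i] });
      i++;
      j++;
    } else if (lcs[i + 1][j] >= lcs[i][j + 1]) {
      ops.push({ kind: 'remove', text: a[i] });
      i++;
    } else {
      ops.push({ kind: 'add', text: b[j] });
      j++;
    }
  }
  while (i < a.length) ops.push({ kind: 'remove', text: a[i++] });
  while (j < b.length) ops.push({ kind: 'add', text: b[j++] });
  return ops;
}

From there, rendering is a straight mapping: 'remove' ops get the red strikethrough, 'add' ops get the green underline, 'keep' ops render plain.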

Edge case: one code change, multiple procedures
When a single change impacted multiple procedures, the summary view surfaced each one with clear subheaders. The full-text view used collapsible sections so users could work through them one at a time without losing context.

The confidence score
My idea, and one of the best-received parts of the design. For a tool asking employees to trust machine-generated edits to procedure documentation, surfacing the model's uncertainty felt essential as a trust signal, not just a UX detail. Engineering complexity pushed it out of the MVP, but the plan was always to ship it.
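
The scoring itself would have belonged to the model side, so this is purely illustrative: a raw 0–1 score bucketed into the at-a-glance tiers the Feed would display. The thresholds are invented.

// Sketch only: bucketing a raw model score into display tiers.
// Thresholds are assumptions, not the shipped values.

type ConfidenceTier = 'high' | 'medium' | 'low';

function confidenceTier(score: number): ConfidenceTier {
  if (score >= 0.9) return 'high';   // routine: skim and approve
  if (score >= 0.7) return 'medium'; // worth a careful read
  return 'low';                      // needs close attention
}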

[04]

Iterate

A few rounds of feedback from the engineering team shaped the approval routing details across both required sign-off levels. The tool handled the first round internally. For the second, a form had to be routed to the compliance team to kick off their process. Getting that handoff right in the UI took a few passes.
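
The routing logic reduces to a small state machine: level one resolved inside the tool, level two handed off to compliance via the routed form. A sketch, with state and event names that are my assumptions:

// Hypothetical state machine for the two required sign-off levels.
// The tool handles the first review internally, then hands off to
// compliance via a routed form; nothing bypasses either level.

type ApprovalState =
  | 'awaiting-owner-review' // level 1: procedure owner, inside the tool
  | 'awaiting-compliance'   // level 2: form routed to the compliance team
  | 'approved'
  | 'rejected';

type ApprovalEvent =
  | 'owner-approved'
  | 'owner-rejected'
  | 'compliance-approved'
  | 'compliance-rejected';

function nextState(state: ApprovalState, event: ApprovalEvent): ApprovalState {
  switch (state) {
    case 'awaiting-owner-review':
      if (event === 'owner-approved') return 'awaiting-compliance';
      if (event === 'owner-rejected') return 'rejected';
      break;
    case 'awaiting-compliance':
      if (event === 'compliance-approved') return 'approved';
      if (event === 'compliance-rejected') return 'rejected';
      break;
  }
  return state; // ignore events that don't apply to the current state
}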

[05]

Handoff + Outcome

I handed the wireframes to the junior engineering team — part of Wells Fargo Small Business Banking Tech's first AI rotation cohort, a program that embedded recent college grads on short-cycle internal AI tool projects.

They fed the wireframes into GitHub Copilot. A working first pass was done in under a week; full AI integration took closer to a month. The tool was socialized broadly and received overwhelmingly positive feedback, both for the UX and for what it represented: a working solution to a pain point that had frustrated the business for years. The projected time savings alone told the story — what had been eating 8–10 hours a week per employee was now closer to 30 minutes of reviewing AI-suggested changes. My manager called me an artist. It was a pretty minimal UI, but I'll take it.

Conclusion + Next Steps

The biggest lesson: know where the real work is. The wireframe was two days. The discovery was months. And the discovery is what made the wireframe possible.

The natural next step was already on the roadmap: an agentic AI layer that could automatically fill out the compliance routing form on behalf of the user. That form was the last human-touched bottleneck in the process, and removing it felt like the logical conclusion of what we'd started. I moved on to a new role before the tool reached production — if I could go back, I'd want to run it with real users, measure the actual time savings, and find where the friction still lives.
