
Role: Lead Product Designer
Tools: usertesting.com, Miro, Figma
The existing Action Plan presented security recommendations as a context-free list, causing users to feel overwhelmed and ignore the feature entirely. With low engagement threatening the product's value and its potential sale to enterprise partners, the feature needed to be redesigned to motivate action and support integration into a merged platform.
I led the UX strategy and design. Working with data scientists, I mapped how the risk algorithm assessed threats and assigned actions, which informed a new organizational structure. I categorized actions by risk type (e.g., identity theft, account takeover), providing context about why users faced each risk before showing recommended actions. I synthesized research findings with our UX researcher, validated designs through user testing, and presented solutions in stakeholder sessions to ensure feasibility.
The redesigned Action Plan became a key differentiator in enterprise sales discussions and contributed to securing a multimillion-dollar partner deal. I established a measurement framework tracking adoption rates and task completion across risk categories to evaluate success post-launch.

Personally identifiable information is continuously tracked through various monitoring methods, including dark web and credit monitoring.

Users are alerted when their information is detected in databases of dark web exposures.

Using these exposure data points, a score is calculated to show users their estimated risk level, providing context for the recommended actions.
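As an illustrative sketch only (the product's actual risk algorithm and weights are proprietary and not described in this case study), the scoring idea can be thought of as a weighted sum over detected exposures, capped at a maximum. All names and weights below are hypothetical:

```python
# Hypothetical weights per exposure type; the real algorithm
# considered here is proprietary and certainly more nuanced.
EXPOSURE_WEIGHTS = {
    "dark_web_listing": 10,
    "credit_inquiry": 5,
    "password_leak": 15,
}

def estimated_risk(exposures, max_score=100):
    """Estimate a 0-100 risk level from a list of detected exposure types."""
    raw = sum(EXPOSURE_WEIGHTS.get(e, 0) for e in exposures)
    return min(raw, max_score)

print(estimated_risk(["dark_web_listing", "password_leak"]))  # 25
```

The point of the sketch is only that each exposure data point contributes to a single, user-facing number that frames the recommended actions.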

Recommendations are provided to mitigate identified risks and help users improve their ID Safety Score.
This feature is the focus of this case study.
Product leadership identified the Action Plan as underperforming with users. Heuristic analysis confirmed major usability gaps. With enterprise partnerships depending on the feature's value, a redesign became critical.
A diary study is a longitudinal research method where participants document their thoughts, behaviors, and experiences over time. This approach provides deep insights into user sentiment, pain points, and behavioral patterns in real-world contexts.
This method was chosen to capture users’ initial sentiments before interacting with the product, observe real-time reactions during first-time use, and track behavioral shifts over an extended period, uncovering how perceptions and engagement evolve.



While alerts were fulfilling the “inform me” need, the Action Plan (meant to guide users) was unclear, unmotivating, and untrusted after initial exposure. Engagement dropped sharply, leaving users without a sense of ongoing protection.
This study was designed and led by our UX researcher; I contributed by synthesizing findings and ensuring the implications shaped the alert redesign.
Heuristic analysis also revealed failures of the "visibility of system status" heuristic: the Action Plan lacked context about what risks users faced and why actions mattered, leaving users unable to assess their situation.
To understand the context behind the Action Plan, users had to navigate to separate tabs to access the ID Safety Score, Top Risks, exposure history, and exposed credentials. This separation made it difficult for users to see the relevance of the recommendations or how those recommendations related to their unique exposure history.

In the original design, the Action Plan was buried below the fold on the dashboard and not directly accessible from main navigation. Users struggled to locate the feature, reducing engagement and preventing it from fulfilling its purpose as a key differentiator.

Applying the Fogg Behavior Model revealed why users weren't acting on recommendations:
Low Motivation: Users didn't understand why they were at risk or why specific actions mattered. Without context connecting actions to their personal threat landscape, the feature felt generic and irrelevant.
Low Ability: The long, undifferentiated list made it unclear where to start or which actions were most important. Users faced high cognitive load with no clear entry point.
Weak Connection Between Prompt and Action: While alerts successfully triggered awareness of threats, they led to a generic action list with no obvious connection to the specific alert. Poor discoverability also meant users who wanted to find the Action Plan proactively struggled to locate it.

The Challenge: Users faced an overwhelming, undifferentiated list of 20+ security actions with no clear prioritization or starting point.
The Approach: Stakeholders initially suggested grouping tasks as "critical" vs "proactive." After auditing the actions, I found this wouldn't work—only a handful qualified as proactive, while most would be labeled critical, which wouldn't reduce overwhelm.
I consulted with a data scientist to understand how the algorithm worked:
Since actions were already tied to specific top risks in the algorithm, I could organize them by risk category—giving users a conceptual framework rather than an arbitrary list.
The Solution: I grouped actions by risk type (Credit Card Fraud, Tax Fraud, Account Takeover, etc.) and provided context about each risk before showing recommended actions. This transformed an overwhelming list into a structured, understandable system.
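Because the risk algorithm already tagged each action with the top risk it mitigated, the reorganization amounts to a simple grouping step. A minimal sketch, with entirely hypothetical action and risk names (the real data model is not shown in this case study):

```python
from collections import defaultdict

# Hypothetical sample data: each action carries the risk tag
# the algorithm already assigned to it.
actions = [
    {"title": "Freeze your credit", "risk": "Credit Card Fraud"},
    {"title": "File taxes early", "risk": "Tax Fraud"},
    {"title": "Enable two-factor authentication", "risk": "Account Takeover"},
    {"title": "Set up transaction alerts", "risk": "Credit Card Fraud"},
]

def group_by_risk(actions):
    """Turn a flat action list into risk-category buckets for display."""
    grouped = defaultdict(list)
    for action in actions:
        grouped[action["risk"]].append(action["title"])
    return dict(grouped)

print(group_by_risk(actions))
```

The design value is in the output shape: each bucket becomes a titled section with its own risk context, rather than one undifferentiated checklist.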



The Challenge: Actions appeared without explanation of why users were at risk or how actions related to their specific threats.
The Approach: I needed to ensure the Action Plan was positioned alongside the context users needed to understand their risks—the ID Safety Score and exposure history. The question was how to structure this information so users could easily move between understanding their risk and taking action.
The Solution: I placed the Action Plan on the same page as the ID Safety Score and exposure data, allowing users to reference their risk level and breach history while reviewing recommended actions. Within each risk category, I provided risk descriptions and explanations of which exposed credentials created vulnerability.
I framed the design around three user questions: "Am I at risk? How do you know? What should I do?" I structured each risk section to answer them in sequence: the score, exposure history, risk explanation, and then the action checklist, all in one view.

This keeps critical context accessible while users review actions, making recommendations feel personally relevant rather than generic.
Why this mattered: This addressed the low motivation barrier by keeping threat relevance clear. Users could verify why they were at risk while reviewing actions.
The Challenge: Users needed to understand which risks were most urgent so they could prioritize their efforts.
The Approach: I explored multiple ways to communicate risk severity:

Through rapid iteration, I tested different visual treatments to find what was most clear and motivating.
The Solution: I communicated risk in terms of ID Safety Score impact—showing users how many points each risk represented and how completing actions would improve their score. This made risk measurable, transparent, and directly tied to a metric users could improve.
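To make the "points per risk" framing concrete, here is an illustrative sketch. The point values and the even split across actions are assumptions for illustration; the actual scoring logic belonged to the data science team:

```python
# Hypothetical point values: each risk represents a share of the
# ID Safety Score, split evenly across its recommended actions.
RISK_POINTS = {"Credit Card Fraud": 12, "Tax Fraud": 8, "Account Takeover": 15}

def points_per_action(risk, action_count):
    """Points a user recovers by completing one action under a risk."""
    return RISK_POINTS[risk] / action_count

def projected_score(current, risk, completed, total):
    """Score after completing `completed` of `total` actions for a risk."""
    return current + points_per_action(risk, total) * completed

# Completing 2 of 3 Account Takeover actions from a score of 60:
print(projected_score(60, "Account Takeover", 2, 3))  # 70.0
```

Showing the projected score next to each action group is what made the metric feel improvable rather than static.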

The Challenge: Users needed to complete multiple tasks across different risk categories. How could we make this feel like a guided flow rather than isolated chores?
The Approach: I explored interaction patterns for:
The Solution: I designed a streamlined workflow where:
I tested two concepts:
Option A: Carousel-based grouping with top risks featured
Option B: Long scrolling list with no grouping but detailed score UI
Key findings:
Synthesis: I combined the risk-based grouping structure from Option A with the accordion interaction pattern and detailed scoring from Option B, creating a hybrid solution that addressed user preferences from both concepts.


The Action Plan moved above the fold on the dashboard and became directly accessible from main navigation. "Your Top Risks" provides an at-a-glance view of progress and a clear entry point to the full experience.
Why this mattered: This strengthened the prompt-to-action connection—users could now easily find and return to their action plan when motivated to act.
Tasks are grouped by top risk (Credit Card Fraud, Tax Fraud, etc.) using accordions instead of carousels. Each risk section provides context about the threat and shows which exposed credentials created vulnerability before presenting actions.
Estimated time and ID Safety Score points appear with each action group, creating clear expectations and motivation to complete tasks.
Why this mattered: This addressed the low ability barrier by breaking an overwhelming list into meaningful categories, and the low motivation barrier by making threats personally relevant with clear impact visibility.

Clicking "Take Action" leads to a focused, distraction-free task view. Users can navigate between tasks via tabs without losing context of which risk they're addressing. The ID Safety Score updates in real-time as tasks are completed, providing immediate feedback on progress.
Why this mattered: This maintained momentum by reducing cognitive load (focused view) while providing continuous reinforcement (score updates) that actions have tangible impact.
Impact:
Future KPIs: