Action Plan: Redesigning Identity Protection Recommendations

Role: Lead Product Designer
Tools: usertesting.com, Miro, Figma

Problem ⚡

The existing Action Plan presented security recommendations as a context-free list, causing users to feel overwhelmed and ignore the feature entirely. With low engagement threatening the product's value and its potential sale to enterprise partners, the feature needed to be redesigned to motivate action and support integration into a merged platform.

Role & Collaboration 🎨

I led the UX strategy and design. Working with data scientists, I mapped how the risk algorithm assessed threats and assigned actions, which informed a new organizational structure. I categorized actions by risk type (e.g., identity theft, account takeover), providing context about why users faced each risk before showing recommended actions. I synthesized research findings with our UX researcher, validated designs through user testing, and presented solutions in stakeholder sessions to ensure feasibility.

Outcome & Impact 🚀

The redesigned Action Plan became a key differentiator in enterprise sales discussions and contributed to securing a multimillion-dollar partner deal. I established a measurement framework tracking adoption rates and task completion across risk categories to evaluate success post-launch.

Context and Research

Platform Context: Key Components
Monitoring

Personally identifiable information is continuously tracked through various monitoring methods, including dark web and credit monitoring.

Alerts

Users are alerted when their information is detected in databases of dark web exposures.

ID Safety Score

Using these exposure data points, a score is calculated to show users their estimated risk level, providing context for the recommended actions.

Action Plan

Recommendations are provided to mitigate identified risks and improve the user's ID Safety Score.

This feature is the focus of this case study.

Business Context

Product leadership identified the Action Plan as underperforming with users. Heuristic analysis confirmed major usability gaps. With enterprise partnerships depending on the feature's value, a redesign became critical.

Research and Insights
Method: Diary Study

A diary study is a longitudinal research method where participants document their thoughts, behaviors, and experiences over time. This approach provides deep insights into user sentiment, pain points, and behavioral patterns in real-world contexts.

  • Day 1: Kick-off (1:1 interview to identify needs)
  • Day 2: First-time impressions of the product
  • Day 3: Regular usage begins
  • Day 30: Final sentiment survey
  • Post-research: Analysis

This method was chosen to capture users’ initial sentiments before interacting with the product, observe real-time reactions during first-time use, and track behavioral shifts over an extended period, uncovering how perceptions and engagement evolve.

It involved:
  • 1:1 Interviews
  • Unmoderated Testing
  • Affinity Map Analysis

Key Insight
Through diary study interviews, we identified the core Job To Be Done:

"Assure me that my identity is as protected as possible"

This assurance depends on two things:
  1. "Inform me when significant or suspicious events occur."
  2. "Guide me on how to respond effectively."

The Problem: Guidance Falls Short

This study was designed and led by our UX researcher; I contributed by synthesizing findings and ensuring the implications shaped the alert redesign.

Heuristic analysis also revealed failures of the visibility-of-system-status heuristic: the Action Plan lacked context about what risks users faced and why actions mattered, leaving users unable to assess their situation.

The Problem Visualized

To understand the context for the Action Plan, users had to navigate to separate tabs to access the ID Safety Score, Top Risks, exposure history, and exposed credentials. This separation made it difficult for users to see the relevance of the recommendations and how those recommendations related to their unique exposure history.

In the original design, the Action Plan was buried below the fold on the dashboard and not directly accessible from main navigation. Users struggled to locate the feature, reducing engagement and preventing it from fulfilling its purpose as a key differentiator.

Understanding the Problem Through Behavioral Design

Applying the Fogg Behavior Model revealed why users weren't acting on recommendations:

Low Motivation: Users didn't understand why they were at risk or why specific actions mattered. Without context connecting actions to their personal threat landscape, the feature felt generic and irrelevant.

Low Ability: The long, undifferentiated list made it unclear where to start or which actions were most important. Users faced high cognitive load with no clear entry point.

Weak Connection Between Prompt and Action: While alerts successfully triggered awareness of threats, they led to a generic action list with no obvious connection to the specific alert. Poor discoverability also meant users who wanted to find the Action Plan proactively struggled to locate it.

Design Process

Four design considerations guided my exploration:
  1. Organizing Long List of Tasks
  2. Content Placement and Context
  3. Communicating Risk
  4. Workflow Optimization

Organizing Long List of Tasks

The Challenge: Users faced an overwhelming, undifferentiated list of 20+ security actions with no clear prioritization or starting point.

The Approach: Stakeholders initially suggested grouping tasks as "critical" vs "proactive." After auditing the actions, I found this wouldn't work—only a handful qualified as proactive, while most would be labeled critical, which wouldn't reduce overwhelm.

I consulted with a data scientist to understand how the algorithm worked:

  • It identifies which breaches exposed the user
  • Determines which personal information was exposed
  • Maps exposures to specific identity crime risks (top risks)
  • Prescribes actions to prevent those crimes
  • Assigns an ID Safety Score based on composite risk

Since actions were already tied to specific top risks in the algorithm, I could organize them by risk category—giving users a conceptual framework rather than an arbitrary list.
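The pipeline the data scientist described can be sketched in code. This is a minimal, hypothetical illustration: all breach names, exposed fields, risk categories, and actions below are placeholder data I've invented to show the mapping structure, not the production algorithm.

```python
# Hypothetical sketch of the risk-mapping pipeline: breaches -> exposed
# fields -> risk categories -> recommended actions. All data is illustrative.

# Which personal data each known breach exposed (illustrative)
BREACH_EXPOSURES = {
    "RetailerLeak2019": ["email", "password"],
    "CreditBureauLeak": ["ssn", "address"],
}

# Which identity-crime risks each exposed field feeds into (illustrative)
FIELD_TO_RISKS = {
    "password": ["Account Takeover"],
    "email": ["Account Takeover"],
    "ssn": ["Tax Fraud", "Credit Card Fraud"],
    "address": ["Credit Card Fraud"],
}

# Recommended actions per risk category (illustrative)
RISK_ACTIONS = {
    "Account Takeover": ["Change reused passwords", "Enable 2FA"],
    "Tax Fraud": ["Request an IRS IP PIN"],
    "Credit Card Fraud": ["Freeze credit", "Review statements"],
}

def actions_by_risk(user_breaches):
    """Group recommended actions by the risks a user's breaches imply."""
    exposed = {f for b in user_breaches for f in BREACH_EXPOSURES.get(b, [])}
    risks = {r for f in exposed for r in FIELD_TO_RISKS.get(f, [])}
    return {risk: RISK_ACTIONS[risk] for risk in sorted(risks)}

plan = actions_by_risk(["RetailerLeak2019", "CreditBureauLeak"])
for risk, actions in plan.items():
    print(risk, "->", actions)
```

Because each action already hangs off a risk category in this structure, grouping the UI by risk required no new data model, only surfacing the existing relationships.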

The Solution: I grouped actions by risk type (Credit Card Fraud, Tax Fraud, Account Takeover, etc.) and provided context about each risk before showing recommended actions. This transformed an overwhelming list into a structured, understandable system.

Content Placement and Context

The Challenge: Actions appeared without explanation of why users were at risk or how actions related to their specific threats.

The Approach:
I needed to ensure the Action Plan was positioned alongside the context users needed to understand their risks—the ID Safety Score and exposure history. The question was how to structure this information so users could easily move between understanding their risk and taking action.

The Solution:
I placed the Action Plan on the same page as the ID Safety Score and exposure data, allowing users to reference their risk level and breach history while reviewing recommended actions. Within each risk category, I provided risk descriptions and explanations of which exposed credentials created vulnerability.

"Am I at risk? How do you know? What should I do?" I structured each risk section to answer these questions in sequence—showing the score, exposure history, risk explanation, and then the action checklist all in one view.

This keeps critical context accessible while users review actions, making recommendations feel personally relevant rather than generic.

Why this mattered:
This addressed the low motivation barrier by keeping threat relevance clear. Users could verify why they were at risk while reviewing actions.

Communicating Risk

The Challenge: Users needed to understand which risks were most urgent so they could prioritize their efforts.

The Approach:
I explored multiple ways to communicate risk severity:

  • Number of exposed data points (quantifiable but not intuitive)
  • Visual progress bars showing risk percentage
  • Color-coded severity labels (severe/moderate/low likelihood)

Through rapid iteration, I tested different visual treatments to find what was most clear and motivating.

Solution:
I communicated risk in terms of ID Safety Score impact—showing users how many points each risk represented and how completing actions would improve their score. This made risk measurable, transparent, and directly tied to a metric users could improve.
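The score-impact framing can be illustrated with a toy model. The point values and risk names here are assumptions for the sketch, not the real scoring model: each outstanding risk costs the user a fixed number of points, and completing its actions recovers them.

```python
# Illustrative sketch of communicating risk as ID Safety Score impact.
# Point weights and risk names are hypothetical, not the real model.

RISK_POINTS = {
    "Credit Card Fraud": 30,  # points lost while this risk is outstanding
    "Account Takeover": 25,
    "Tax Fraud": 15,
}

MAX_SCORE = 100

def score_summary(completed_risks):
    """Current score plus the points recoverable per outstanding risk."""
    outstanding = {
        risk: pts for risk, pts in RISK_POINTS.items()
        if risk not in completed_risks
    }
    current = MAX_SCORE - sum(outstanding.values())
    return current, outstanding

score, recoverable = score_summary(completed_risks={"Tax Fraud"})
# With Tax Fraud addressed, the two remaining risks still cost 55 points
```

This framing makes each recommendation answer "what do I get for doing this?" in the same unit as the user's headline metric.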

Workflow Optimization

The Challenge: Users needed to complete multiple tasks across different risk categories. How could we make this feel like a guided flow rather than isolated chores?

The Approach: I explored interaction patterns for:

  • Task navigation (next/previous vs menu vs modal)
  • Completion feedback (what happens when user marks a task complete?)
  • Progress celebration (what happens when checklist is finished?)

The Solution: I designed a streamlined workflow where:

  • Users can navigate tasks sequentially or jump to specific ones
  • Completing a task shows immediate feedback and score impact
  • Finishing a checklist provides celebration and shows updated risk status
  • Users maintain flexibility in task order rather than forced linear progression

Usability Testing & Validation

Tested two concepts:

Option A: Carousel-based grouping with top risks featured
Option B: Long scrolling list with no grouping but detailed score UI

Key findings:

  • Testers preferred top-risk grouping (Option A) but found the carousels clunky and felt they obscured tasks
  • Testers preferred more detailed score UI (Option B)
  • Testers wanted autonomy in selecting actions rather than sequential navigation (Option B)
  • No significant usability issues with either approach

Synthesis: I combined the risk-based grouping structure from Option A with the accordion interaction pattern and detailed scoring from Option B, creating a hybrid solution that addressed user preferences from both concepts.

Final Solution

Improved Discoverability

The Action Plan moved above the fold on the dashboard and became directly accessible from main navigation. "Your Top Risks" provides an at-a-glance view of progress and a clear entry point to the full experience.

Why this mattered: This strengthened the prompt-to-action connection—users could now easily find and return to their action plan when motivated to act.

Risk-Based Organization with Context

Tasks are grouped by top risk (Credit Card Fraud, Tax Fraud, etc.) using accordions instead of carousels. Each risk section provides context about the threat and shows which exposed credentials created vulnerability before presenting actions.

Estimated time and ID Safety Score points appear with each action group, creating clear expectations and motivation to complete tasks.

Why this mattered: This addressed the low ability barrier by breaking an overwhelming list into meaningful categories, and the low motivation barrier by making threats personally relevant with clear impact visibility.

Focused Task Experience

Clicking "Take Action" leads to a focused, distraction-free task view. Users can navigate between tasks via tabs without losing context of which risk they're addressing. The ID Safety Score updates in real-time as tasks are completed, providing immediate feedback on progress.

Why this mattered: This maintained momentum by reducing cognitive load (focused view) while providing continuous reinforcement (score updates) that actions have tangible impact.

Measuring Success

Impact:

  • Transformed a flat, contextless checklist into a transparent, guided system.
  • The design strategy was adapted and integrated into the white-label product by another designer during implementation.
  • Design work contributed to sales presentations that secured purchase by a major bank, resulting in a multi-million-dollar contract.

Future KPIs:

  • Page engagement → frequency and repeat visits to the Action Plan.
  • Task completion → percentage of recommended actions users follow through on.
  • Feature adoption → overall proportion of users engaging compared to the prior version.
  • Embedded surveys → a quick "Was this helpful?" prompt to collect targeted feedback.
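These KPIs could be computed from simple per-user usage stats. The event fields and sample numbers below are illustrative assumptions, sketching how adoption and task completion might be aggregated.

```python
# Hypothetical sketch of the post-launch measurement framework.
# Field names and sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UserStats:
    visited_action_plan: bool   # feature adoption signal
    actions_recommended: int
    actions_completed: int

def kpis(users):
    """Adoption rate and mean per-user task-completion rate."""
    adoption = sum(u.visited_action_plan for u in users) / len(users)
    completion = sum(
        u.actions_completed / u.actions_recommended
        for u in users if u.actions_recommended
    ) / len(users)
    return adoption, completion

sample = [
    UserStats(True, 10, 7),
    UserStats(True, 8, 4),
    UserStats(False, 12, 0),
]
adoption, completion = kpis(sample)
# 2 of 3 users adopted; per-user completion averages (0.7 + 0.5 + 0.0) / 3
```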