Business logic based data archival & retention - Pega Platform

B2B

Enterprise App

SaaS

Web App

Low-code

*Data and designs have been altered to honour the NDA

Product

Pega Infinity

My Role

Lead Designer

Team

1 PD, 2 PMs, and 5 Engineers

Timeline

3–4 months (research → design → leadership alignment → dev handoff → QA)

TL;DR

Enterprises on Pega were forced to set a single, maximum data retention period for all regions, ensuring compliance but driving up storage costs. I designed a Retention Policy Builder that lets providers create targeted, jurisdiction‑aware rules (e.g., EU = 7 years, US = 5 years, TX = 10 years). This reduced wasted storage, simplified audits, and preserved compliance. I led research, design, leadership alignment, and developer handoff, ensuring the feature shipped true to design.

My Contributions

End-to-end UX Design

Cross-functional Collaboration

Enterprise Workflow Design

Wireframes & Prototyping

Leadership Alignment

Design Handoff

System Thinking

Usability Testing

What is Pega?

Pega is a leading enterprise low-code application development platform. Companies use it to build case-centric apps like insurance claims, customer service workflows, or compliance processes without heavy custom coding. It’s widely adopted in industries like insurance, banking, and healthcare for its ability to handle complex, regulated business operations at scale.

PROBLEM SPACE

What is data archival & retention?

Large enterprises build case‑centric applications on Pega. Over time these cases accumulate customer records, claims, attachments, and audit logs, all of which live in primary database storage.

Archival and data retention in Pega Platform were based on a single day-count threshold: providers could define how many days to retain data, but that threshold applied uniformly to every region.

What was not working?

Industry and local laws require minimum retention windows for archived data, but those windows vary by country and by sub‑jurisdiction (U.S. states, for example).

The workaround used by many customers was to set a single retention period equal to the maximum required window across their footprint.

Why it mattered

The workaround was inefficient and drove up storage costs.

Let’s see what it costs a large U.S. insurer to hold on to data for just three extra years.

Claims in 3 years: 1,20,00,000 (12 million)

Data usage per claim: 100 MB

Data required for 3 years: 1,144 TB

Approximate storage cost: $220,000

Total cost (storage + other overhead costs): $376,000
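As a sanity check, the figures above can be reproduced with some quick arithmetic (the per-TiB rate is implied by the table's own numbers, not an official cloud price):

```python
# Reproduce the storage figures from the table above.
claims = 12_000_000          # claims accumulated over 3 years
mib_per_claim = 100          # average data per claim, in MiB

total_tib = claims * mib_per_claim / 1024**2   # MiB -> TiB
print(round(total_tib))                        # ~1144 (matches "1,144 TB")

storage_cost = 220_000                         # from the table, in USD
implied_rate = storage_cost / total_tib / 36   # $/TiB/month over 36 months
print(round(implied_rate, 2))                  # ~5.34
```

The implied rate of roughly $5 per TiB-month is in the range of warm cloud storage, which is what makes three extra years of unnecessary retention so expensive at this claim volume.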

Cost

Long retention in primary/secondary storage increases monthly and long‑term cloud spend.

Operational burden

Large archives slow searches, backups, and add complexity during eDiscovery and audits.

Business goal

Enable customers to remain compliant while minimizing unnecessary storage costs.

User goal

Let providers create policies that match real legal obligations, without engineering changes.


Research & discovery

Because the problem sits at the intersection of product, data infrastructure, and regulation, I used a mixed-methods approach:

Stakeholder interviews

1:1s with PMs, Provider success team and Compliance to understand risks, constraints, and success criteria.

System walkthroughs

Engineers walked me through existing archive/purge pipelines, eligible items for archiving, and edge cases.

Audit & policy review

We collected example retention rules from sample customers and public guidance to enumerate common retention durations (e.g., 3–10 years depending on type and jurisdiction).

Usability sketching sessions

Quick sessions with PMs/engineers to iterate on the condition builder metaphors (rule compiler vs visual builder vs spreadsheet style).

Design insight

Providers wanted a low‑cognitive way to express complex, conditional rules (region + class + data type) that felt like configuring a policy, not writing code.

SOLUTION

Design principles

Declarative

Guided controls rather than free‑form code.

Speed

Leverage existing libraries for rapid development.

Composability

Allow multiple targeted policies instead of one global rule.
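A minimal sketch of what these principles imply for the underlying model (the names here are hypothetical, not Pega's actual rule schema): each policy is a declarative condition set plus a retention window, and multiple policies compose with a first-match-wins order and a global fallback.

```python
from dataclasses import dataclass

@dataclass
class RetentionPolicy:
    conditions: dict        # e.g. {"region": "EU"} or {"state": "TX"}
    retention_days: int

def retention_for(case: dict, policies: list, default_days: int) -> int:
    """Return the retention window for a case: first matching policy wins."""
    for policy in policies:
        if all(case.get(k) == v for k, v in policy.conditions.items()):
            return policy.retention_days
    return default_days

# Jurisdiction-aware rules from the TL;DR: TX = 10 years, EU = 7, US = 5.
# More specific policies are listed first, so TX overrides the US default.
policies = [
    RetentionPolicy({"state": "TX"}, 10 * 365),
    RetentionPolicy({"region": "EU"}, 7 * 365),
    RetentionPolicy({"region": "US"}, 5 * 365),
]

print(retention_for({"region": "US", "state": "TX"}, policies, 7 * 365))  # 3650
print(retention_for({"region": "US", "state": "CA"}, policies, 7 * 365))  # 1825
```

Keeping each policy a small, independent record is what makes the feature auditable: a compliance reviewer can read the list top to bottom instead of tracing custom code.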

Wireframes

After multiple brainstorming sessions with stakeholders, we narrowed down the key requirements:

Support for legacy policy method

Option to create new business logic

Policy threshold inputs

Faster policy editing for projects under development

Testing mode to trigger policies quickly

Key screens & interactions

  1. Archival and data retention option in the settings of each case type; providers can opt in to archival if needed.

  2. Support for legacy archival and a dedicated mode for rapid testing.

  3. Simplified user flow with a clear choice between flat and custom logic, reducing decision friction.

  4. Custom logic made simple: an intuitive Condition Builder for variable, operator, and value rules.

  5. Added threshold input: lets users fine-tune each logic with day-based limits.
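The Condition Builder's variable / operator / value rows and the day-based threshold can be sketched roughly as follows (the operator set and field names are illustrative, not the shipped implementation):

```python
import operator
from datetime import date, timedelta

# Illustrative operator palette for the variable / operator / value rows.
OPS = {
    "equals": operator.eq,
    "not equals": operator.ne,
    "greater than": operator.gt,
    "less than": operator.lt,
}

def condition_matches(case: dict, variable: str, op: str, value) -> bool:
    """Evaluate one Condition Builder row against a case's fields."""
    return OPS[op](case.get(variable), value)

def eligible_for_archive(resolved_on: date, threshold_days: int, today: date) -> bool:
    """Day-based threshold: archive once the case has been resolved long enough."""
    return today - resolved_on >= timedelta(days=threshold_days)

case = {"region": "EU", "caseType": "Claim"}
print(condition_matches(case, "region", "equals", "EU"))                  # True
print(eligible_for_archive(date(2020, 1, 1), 7 * 365, date(2024, 1, 1)))  # False: only ~4 years elapsed
```

Separating the condition check from the threshold check mirrors the UI: the conditions decide *which* policy applies, and the threshold decides *when* a matching case becomes eligible.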

RESULT

Outcome & impact

*Dummy numbers due to NDA

Qualitative outcomes

  • Customers could author targeted, jurisdictional retention policies without developer support.

  • Providers reported clearer audit trails and faster compliance reviews.

  • Engineering reduced the need for ad‑hoc customer scripts and custom deployments.

Quantitative impact

  • Data usage on secondary storage decreased by 30%.

  • Provider overhead cost reduced by 2%.

What I learnt

  • Small configuration power given to operators can unlock outsized operational and cost benefits, often an order of magnitude more impactful than a purely cosmetic UX improvement.

  • Aligning early with engineering on data models and sampling strategies is essential for preview accuracy and performance.

Let's connect

Resume

Akram Nawaaz

Hyderabad, Telangana, India

akram@nawaaz.in
