Corporate Achievements

How I Eliminated 70% of Manual Work: Enterprise Automation Case Study

Senior management was spending 250+ hours annually on repetitive data entry. I spent 40 hours building an n8n workflow that eliminated it. The system has run for 2+ years with 15 minutes monthly maintenance.

I was standing in a conference room at a global medical aesthetics and technology company, watching our Regional Director spend eight minutes manually copying data from an email into Excel, then into our database system, then into a country-specific tracker. When he finished, he leaned back and said, "This is my Tuesday. Sixteen emails like this before lunch."

Sixteen emails. At eight minutes per email. Senior management time. Every single day.

I watched the pattern repeat: Email arrives → Open it → Extract key fields by hand → Enter into SAP → Enter into regional tracker → Email confirmation back. No variation. No exceptions. Pure repetition. The same workflow, hundreds of times per year, performed by people being paid to think strategically, not to type.

I realized in that moment: This wasn't just inefficient. This was preventable.

By the time I finished optimizing that workflow, we'd recovered 250+ hours of senior management time annually. The entire system ran on open-source tools that cost nothing. It required roughly 15 minutes per month to maintain. And the best part? After the initial 40 hours to build it, the system essentially disappeared into the background. No one noticed it anymore; they just noticed they weren't drowning in data entry.

This is the story of how I eliminated 70% of manual work across an eight-country operation using nothing but pattern recognition and an open-source automation platform.

The Problem: Hidden in Plain Sight

When I joined a global medical aesthetics and technology company as Senior Regional Engineer, Medical Technology (APAC) in 2020, the company had a significant but invisible problem: senior management was doing manual data processing work that should have been automated.

The RMA (Return Merchandise Authorization) process, which covered product complaints, device returns, and warranty claims, was the core of our regional operations. Every complaint that came in followed the same basic pattern:

  1. Email from country manager or distributor
  2. Manual extraction of key information (device model, serial number, failure type, customer location)
  3. Entry into SAP (the corporate system)
  4. Entry into regional tracking spreadsheet
  5. Data fed into country-specific databases
  6. Confirmation email sent back

That's 8-12 manual touchpoints per complaint. And we were getting hundreds per month across APAC.

The time cost was enormous:

  • Per complaint: 5-8 minutes of senior management attention
  • Monthly volume: ~40-50 complaints processed by management-level staff
  • Annual time investment: 250+ hours of expensive talent doing clerical work
  • Annual cost: Roughly SGD 150K+ in salary hours spent on data entry

But here's the critical thing: No one thought this could be automated. It had always been done manually. It was "just how the process worked." When I asked colleagues if anyone had considered automating it, I got blank stares. The assumption was that the emails were too unstructured, the data too varied, and the regional differences too complex.

I looked at the problem differently. I didn't see unsolvable complexity. I saw patterns.

The Assessment: Finding the Signal in the Noise

Before building anything, I did what I always do: I observed the actual workflow instead of assuming I understood it.

I spent a week doing the data entry myself, literally. I sat with colleagues as they processed emails, watched how they extracted information, noticed which fields they prioritized, observed where they made mistakes. I timed each step. I counted touchpoints. I documented every variation.

Here's what I found:

The Patterns:

  • 80% of complaints followed a standard structure: "Device X model Y serial Z has issue W in country N"
  • Common failure types appeared repeatedly: Same device models, same failure modes, same documentation requirements
  • Data sources were consistent: Outlook emails used similar language, spreadsheet formats were standardized, database fields were fixed

The Constraints:

  • No IT budget for new tools
  • Existing systems had to stay (SAP wasn't going anywhere)
  • Regional operations needed different handling (Malaysia's process differs from Singapore's)
  • GxP compliance requirements (medical device industry; we couldn't break audit trails)

The Opportunity: Email structure was consistent enough to parse. The information extraction was scriptable. The downstream destinations were known and fixed. And the error rate on edge cases was manageable if we designed for human review of anything uncertain.

The Insight: I could automate the 80% use case perfectly and flag the 20% for human attention. That alone would eliminate most of the manual burden.

The Solution: n8n and the Open-Source Advantage

I chose n8n, an open-source workflow automation platform, for three reasons:

  1. No licensing cost (aligned with "using less resource and achieve big time")
  2. Self-hosted capability (no external dependencies, full control)
  3. Flexible enough for complex regional workflows (not oversimplified like Zapier, not oversized like enterprise RPA tools)

The architecture was elegant in its simplicity:

Workflow 1: Email Monitoring & Parsing

  • Monitor Outlook inbox for specific keywords indicating RMA requests
  • Extract email content using pattern matching (JavaScript regex)
  • Identify key fields: Device model, serial number, issue description, requestor location
  • Flag confidence level (high confidence vs. needs human review)
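The steps above can be sketched in plain JavaScript, the language n8n Function nodes run. The field names and regex patterns here are illustrative stand-ins, not the production rules, which were tuned against real emails over several iterations:

```javascript
// Illustrative sketch of the Workflow 1 extraction step.
// These patterns are hypothetical examples, not the actual production rules.
const FIELD_PATTERNS = {
  model:   /model[:\s]+([A-Z0-9-]+)/i,
  serial:  /serial(?:\s*(?:no\.?|number))?[:\s]+([A-Z0-9-]+)/i,
  issue:   /issue[:\s]+(.+)/i,
  country: /country[:\s]+([A-Za-z ]+)/i,
};

function parseRmaEmail(body) {
  const fields = {};
  let matched = 0;
  for (const [name, pattern] of Object.entries(FIELD_PATTERNS)) {
    const m = body.match(pattern);
    if (m) {
      fields[name] = m[1].trim();
      matched += 1;
    }
  }
  // Confidence rule: every field found means high confidence, push downstream;
  // anything missing routes the whole email to human review.
  const confidence =
    matched === Object.keys(FIELD_PATTERNS).length ? "high" : "review";
  return { fields, confidence };
}
```

The important design choice is the last step: a partial match never goes downstream silently; it becomes a human-review item.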

Workflow 2: Classification & Validation

  • Auto-classify by complaint type (hardware defect, software issue, misuse, damage)
  • Route to appropriate database based on device model
  • Apply region-specific business rules (different countries have different warranty terms)
  • Generate standardized data structure for downstream systems
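A minimal sketch of what that classification and routing might look like, with an invented region table and keyword lists standing in for the real business rules:

```javascript
// Hypothetical region rules: warranty terms and destination differ per country.
const REGION_RULES = {
  SG: { warrantyMonths: 24, database: "sg_rma_tracker" },
  MY: { warrantyMonths: 12, database: "my_rma_tracker" },
};

// Hypothetical keyword-based complaint classifier.
function classifyComplaint(issueText) {
  const text = issueText.toLowerCase();
  if (/crack|drop|spill/.test(text)) return "damage";
  if (/firmware|software|update/.test(text)) return "software issue";
  if (/misuse|wrong use/.test(text)) return "misuse";
  return "hardware defect"; // default bucket, still verified downstream
}

function routeComplaint(countryCode, issueText) {
  const rules = REGION_RULES[countryCode];
  // Unknown region: flag for human review rather than guessing.
  if (!rules) return { route: "human-review", reason: "unknown region" };
  return {
    route: rules.database,
    warrantyMonths: rules.warrantyMonths,
    type: classifyComplaint(issueText),
  };
}
```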

Workflow 3: Multi-Destination Distribution

  • Push high-confidence data to SAP via available API
  • Update regional tracking spreadsheet (CSV generation)
  • Generate country-specific database entries
  • Create automated response emails to requester with ticket number and ETA
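The distribution step reduces to simple formatting once the record is validated. This sketch assumes a hypothetical record shape (`ticket`, `model`, `serial`, `country`, `type`); the real tracker columns and email wording differed:

```javascript
// Sketch of the Workflow 3 distribution step: one validated record becomes
// a CSV row for the regional tracker and an acknowledgement email body.
function toCsvRow(record) {
  const cols = [record.ticket, record.model, record.serial, record.country, record.type];
  // Quote every field so commas inside values don't break the tracker.
  return cols.map((c) => `"${String(c).replace(/"/g, '""')}"`).join(",");
}

function toAckEmail(record) {
  return [
    `Subject: RMA received - ticket ${record.ticket}`,
    "",
    `We have logged your return for ${record.model} (S/N ${record.serial}).`,
    "Estimated first response: 2 business days.",
  ].join("\n");
}
```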

Error Handling & Monitoring:

  • Low-confidence matches flagged for human review (not silently wrong)
  • Dashboard showing processing statistics: Volume processed, error rate, human review rate
  • Alert system for any failures in the pipeline
  • Monthly review of flagged cases to improve regex patterns
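The dashboard numbers are straightforward to compute from the processed items. This sketch assumes each item carries the `confidence` and `status` fields set earlier in the pipeline:

```javascript
// Sketch of the monthly monitoring summary: volume processed,
// human-review rate, and failure rate.
function summarize(items) {
  const total = items.length;
  const reviewed = items.filter((i) => i.confidence === "review").length;
  const failed = items.filter((i) => i.status === "error").length;
  return {
    total,
    reviewRate: total ? reviewed / total : 0,
    errorRate: total ? failed / total : 0,
  };
}
```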

The entire system was built in approximately 40 hours of work:

  • Week 1: Workflow design and email pattern analysis (10 hours)
  • Week 2: Proof-of-concept with test emails (8 hours)
  • Week 3: Refinement of parsing logic and error handling (12 hours)
  • Week 4: Live data testing and rule adjustment (10 hours)

I didn't build this perfectly the first time. I iterated. I tested. I fixed. But the initial version was good enough to prove value.

Implementation: Starting Small, Scaling Quietly

I made a deliberate choice not to announce this as a "major project" or "system transformation." I didn't schedule meetings. I didn't create a project charter. I just... built it.

Phase 1: Personal Validation

I set the system to process my own RMA workflow first. For two weeks, I compared manual output against automated output. When I found discrepancies, I adjusted the rules. When the system caught things I'd missed, I trusted it. Once I was confident, I moved forward.

Phase 2: Volunteer Adoption

I mentioned to a colleague, "I've built something that might reduce manual work. Want to try it?" I didn't pressure. I offered. When they saw their five-minute task suddenly take 30 seconds, they were interested.

Phase 3: Organic Spread

Colleagues asked colleagues. Managers asked how to use it. I gave a 15-minute training session to interested people. I created simple documentation. I positioned it as "process improvement," not "replacement technology."

Phase 4: Regional Rollout

Once I had proof of concept with Singapore operations, I adapted the regional rules for Malaysia, the Philippines, Indonesia, Vietnam, and India. Each country had slightly different requirements, but the core workflow remained the same.

The key was positioning: I never made anyone feel like their job was being eliminated. I positioned it as "the system handles the routine stuff, now you can focus on complex cases." That's true-and people responded positively.

The Results: Proof That Systems Compound

Here are the measurable outcomes after full deployment:

Time Savings:

  • Manual touchpoints per complaint: 8-12 → 2-3 (80% reduction)
  • Time per complaint: 5-8 minutes → 1 minute (downstream verification only)
  • Annual time recovered: 250+ hours of senior management time
  • Monthly maintenance: ~15 minutes to monitor and adjust

Quality Improvements:

  • Error rate: Decreased (fatigue-related mistakes eliminated)
  • Consistency: Increased (same rules applied across regions)
  • Compliance: Strengthened (audit trail automatically maintained)

Operational Impact:

  • Response time to customers: Faster (automatic ticket generation)
  • Data availability: Improved (centralized dashboard of all regional cases)
  • Regional consistency: Standardized (same process everywhere)

Sustainability:

  • The system has now run for 2+ years with minimal maintenance
  • I've made maybe 5 updates total, each taking 15 minutes
  • New team members learn the system in 30 minutes
  • It's self-documenting (the workflow visually shows what's happening)

ROI Calculation:

  • Time invested: 40 hours
  • Time recovered per year: 250 hours
  • Payback period: About two months (250 recovered hours/year ≈ 4.8 hours/week against the 40-hour build)
  • ROI over two years: 1,250% (250 hours/year × 2 years ÷ 40 hours invested)

And none of this required asking for budget. No software purchases. No vendor negotiations. No IT department dependencies.

The Methodology: How You Can Do This Too

This case study works because it followed a principle I've applied since age 18 running game automation: Don't try to solve everything. Solve the 80% use case perfectly.

Here's the methodology distilled into actionable steps:

Step 1: Observe, Don't Assume

Sit with the actual workflow being performed. Don't ask "how should this work?" Watch how it actually works. Time each step. Count variations. Identify what's routine versus exceptional.

Most automation fails because people automate their theory of the process, not the actual process. There's a huge difference.

Step 2: Target the Repetitive 80%

Don't try to automate every edge case. The 20% of unusual situations will cost you 80% of your build time if you try to handle them. Instead:

  • Automate the common case flawlessly
  • Design excellent handling for edge cases (manual review)
  • Iterate: As you see what edge cases actually occur, add rules

This is why my system handles 80% automatically and flags 20% for human review. That achieves 80% time savings with maybe 20% of the complexity.

Step 3: Use What You Have

Before requesting budget:

  • Check if existing tools can be extended (Excel scripts, platform APIs)
  • Explore open-source alternatives (n8n, Node-RED, UI.Vision)
  • Prove value with free tools before asking for enterprise licenses

This proved critical in my own case: I built something that saved 250 hours annually without touching the IT budget. That's hard to argue with.

Step 4: Build Small, Iterate Fast

  • Proof-of-concept with real data from day one (not toy examples)
  • Test on yourself first before exposing to others
  • Gather feedback from actual users (not from executives)
  • Iterate based on real failures, not theoretical edge cases

My first version handled maybe 70% correctly. Iteration got it to 95%+. That iteration was crucial and only possible with real data.

Step 5: Design for Maintenance

This is where most automation projects fail. They work great for six months, then break when someone leaves or a system changes.

Design for simplicity:

  • Clear logic that someone else could understand
  • Documentation for future-you (not written for today-you)
  • Error alerts (silent failures are nightmares)
  • Monitoring dashboard showing health
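The "error alerts" point deserves a sketch, because it's the cheapest habit to adopt: wrap every pipeline step so a thrown error becomes an alert record instead of vanishing. Where the alerts end up (email, chat webhook) is left open here:

```javascript
// Sketch of the "no silent failures" rule: each pipeline step runs inside a
// wrapper that records any error as an alert instead of swallowing it.
function runStep(name, fn, input, alerts) {
  try {
    return { ok: true, value: fn(input) };
  } catch (err) {
    alerts.push({ step: name, message: String((err && err.message) || err) });
    return { ok: false };
  }
}
```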

I built this assuming I might leave or move teams. Would it survive? Yes. That's good design.

What This Actually Represents

This case study is more than just "I built an automation system." It's a demonstration of three things I care about deeply:

First: Systems Thinking Scales

I learned systems thinking at 18 running game automation in a polytechnic library. I'm applying the exact same principles at 38 in a multinational pharmaceutical company. The scale changed. The principles didn't.

Second: "Using Less Resource and Achieve Big Time" Actually Works

This is my core philosophy. I didn't get an enterprise RPA platform (expensive, vendor lock-in). I didn't hire consultants (expensive, external dependency). I didn't ask for budget (none available). I used an open-source tool, my own time, and pattern recognition. Result: 250 hours recovered annually.

Third: The Best Work Is Invisible

When I finished this project, I wanted no credit. No announcement. No recognition. I wanted the system to work so smoothly that nobody remembered it was ever manual. That's craft. That's what "work as craft, not career" means.

Most people optimize for visibility (so they get promoted). I optimize for invisibility (so the work is excellent and timeless). It's a different motivation entirely.

The Larger Context

This project didn't happen in isolation. It was built on 15 years of accumulated knowledge:

  • 2008: Game automation taught me that patterns could be extracted and systematized
  • 2009-2010: Hardware reconditioning taught me that quality assessment and workflow optimization were scalable skills
  • 2013-2016: a B2B industrial automation company taught me how to operate at corporate scale
  • 2016-2019: a regional medical devices distributor taught me medical device operations and vendor relationships
  • 2018: E-commerce launch proved automation principles worked across domains
  • 2020: COVID recovery proved crisis response capability

Each of these built the foundation for this 70% efficiency gain in 2022.

But here's the thing: You don't need 15 years of background to apply this methodology. You just need one thing: the belief that inefficient processes deserve better.

The Reflection

Two years after deploying this system, it's still running. It still saves 250+ hours annually. It still requires about 15 minutes per month of maintenance. Most days, I don't think about it at all.

That's exactly how I wanted it to be.

The best automation is invisible. It's so reliable, so seamless, that people forget it was ever manual. They just notice they have their time back. They notice the process is faster. They notice fewer errors. They don't remember that a human used to do this by hand.

That's the goal. Not recognition. Not credit. Just systems that work so well they disappear.


The Takeaway

If you're reading this and thinking "we have the same problem at my company," you probably do. Most organizations have dozens of repetitive manual workflows that could be automated for free using open-source tools.

The barrier isn't technical. The barrier is usually permission-the belief that you're allowed to build this.

Here's my suggestion: Pick one workflow. Observe it for real. Build a proof-of-concept. Show the time savings. Let the results speak.

Don't ask for permission in advance. Ask for forgiveness after you've proven it works.

That's how you go from "this is just how things are done" to "wait, this is automated now?"

Shi Jun


Senior Regional Technical Operation and Quality Engineer, Medical Technology / Pharma Industry. Building automated systems since 2008. Philosophy: "Using less resource and achieve big time."