Homelab Evolution: From Hobby to Production Infrastructure
Five years from a single media server to a full self-hosted stack: Jellyfin, Immich, n8n, Node-RED, Docker, Tailscale, Caddy. The infrastructure that prototypes everything I later deploy at enterprise scale.
I was sitting on my couch at 11 PM on a Tuesday, watching a Docker container spin up on my homelab server, when I realized something: Everything running in this personal lab had a direct application at work.
Not eventually. Not theoretically. This week.
The Jellyfin media server taught me about streaming optimization principles I'd later apply to medical device data transfer. The Immich photo backup system showed me cloud architecture tradeoffs I'd use in enterprise design decisions. The n8n workflow orchestrator was literally the same tool I was deploying in production at a global medical aesthetics and technology company, just prototyped at home first.
My homelab wasn't a hobby. It was an R&D lab with a 100% transfer rate to professional use.
From 2019 to 2024, I'd evolved it from a basic media server to a production-grade infrastructure that looks like a smaller version of what enterprise operations actually need. And every layer of that evolution taught me something I carried into work.
This is the story of how a basement full of hardware became a laboratory that raised my entire professional game.
2019: The Beginning (Media Server)
I started simply, like everyone does: "I want to own my media instead of streaming it."
Built a basic Jellyfin server on an old laptop:
- Runs on Linux
- Streams video to my phone when I'm away
- Handles multiple simultaneous streams
- Cost: basically free (repurposed hardware)
That's it. Basic setup, one service, one person using it.
The lesson: Simplicity works. Don't build what you don't need yet.
2020: Expansion (Automation Entry)
COVID happened. Suddenly I needed more infrastructure.
Started running Node-RED on the same server:
- Basic workflow automation (yes/no logic, conditional actions)
- IoT device triggers (smart home stuff, light schedules)
- Integration between different systems
This was the turning point. Node-RED was supposed to be a hobby. But I was learning workflow orchestration, the same thinking I'd need in enterprise automation.
By mid-2020, I was testing RMA automation concepts on the homelab version of Node-RED. By end of 2020, I was deploying them at a global medical aesthetics and technology company.
The leap was just a scale difference. The principles were identical.
Cost: Still free (Node-RED is open-source)
2021: Containerization (Learning Docker)
I'd heard about Docker in work contexts. Decided to learn it properly.
Installed Docker Compose and containerized everything:
- Jellyfin in a container
- Node-RED in a container
- Isolated services, coordinated management
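That setup can be sketched as a single Compose file. Here's a minimal version of the two-service stack; image tags, ports, and volume paths are illustrative assumptions, not my exact config:

```shell
# Minimal sketch of the two-service homelab stack described above.
# Image names/ports are illustrative; check each project's docs for current tags.
mkdir -p ~/homelab && cd ~/homelab
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"            # web UI / streaming
    volumes:
      - ./jellyfin/config:/config
      - ./media:/media:ro      # media library, read-only
    restart: unless-stopped
  nodered:
    image: nodered/node-red:latest
    ports:
      - "1880:1880"            # flow editor
    volumes:
      - ./nodered/data:/data
    restart: unless-stopped
EOF
# Then bring both containers up with:
#   docker compose up -d
```

Upgrading one service without touching the other is then just `docker compose pull jellyfin && docker compose up -d jellyfin`, which is exactly the isolation question the next paragraph is about.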
This is when it got real.
Docker taught me something enterprise teams struggle with: How do you run multiple services that depend on each other? How do you upgrade one without breaking others? How do you move the whole stack somewhere else if hardware fails?
I spent two months breaking my homelab with bad Docker configurations. Fixed it. Broke it again. Eventually got good.
By 2022, I was using Docker at work for containerizing medical device automation systems. The knowledge transfer was direct: Same syntax, same thinking, same debugging patterns.
The difference: At home I could break things without consequences. At work I was applying proven concepts.
Cost: Still free (Docker is open-source), but consumed significant learning time
2022: Workflow Orchestration (n8n)
This is where the professional and personal converged completely.
Started running n8n (workflow orchestration platform) on the homelab:
- More sophisticated than Node-RED
- Visual workflow builder
- Better error handling
- Closer to production-grade
At the exact same time, I was architecting RMA automation at a global medical aesthetics and technology company using... n8n.
So what did I do? I built the entire RMA workflow at home first.
Tested error cases. Tested recovery paths. Tested what happens when SAP is slow. Tested the full data pipeline from source to output. Fixed bugs in the safe environment of my living room. Then deployed the proven solution at work.
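The "what happens when SAP is slow" tests boil down to a retry-with-backoff pattern. n8n has retry settings built into its nodes; here's the same idea as a plain shell sketch, with a hypothetical `call_endpoint` standing in for the slow system:

```shell
# Retry-with-backoff sketch. 'call_endpoint' is a hypothetical stand-in for
# a slow/flaky downstream call; here it "succeeds" only once a flag file exists.
rm -f /tmp/endpoint_up
call_endpoint() {
  [ -f /tmp/endpoint_up ]
}

attempt=1
max_attempts=3
delay=1
until call_endpoint; do
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up after $max_attempts attempts" >&2
    break
  fi
  sleep "$delay"
  delay=$((delay * 2))          # exponential backoff: 1s, 2s, 4s, ...
  attempt=$((attempt + 1))
done
```

The point of testing this at home first: you learn whether your backoff ceiling and give-up behavior are right before a real SAP outage teaches you the hard way.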
This is the advantage of having your own lab. You can fail fast, iterate, prove concepts before they touch production.
The RMA automation runs at a global medical aesthetics and technology company with 0.01% maintenance effort. It works so reliably because I'd already debugged it a hundred times at home.
Cost: Free (n8n has open-source version), but represented significant architectural learning
2023: Photo Management (Immich)
Added Immich, a self-hosted photo backup and management system.
Why? I wanted to understand cloud architecture without using cloud services:
- How do you handle large-scale uploads?
- How do you optimize storage?
- How do you retrieve historical data efficiently?
- What trade-offs exist between speed and storage?
Immich taught me things that shaped how I'd later think about data pipelines:
- Metadata matters as much as raw data
- Indexing is crucial for retrieval speed
- Batch operations beat single operations
- Cloud abstractions hide complexity that matters
These aren't just photo backup lessons. They're architecture lessons that apply to medical device databases, customer records, any data system.
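Two of those lessons, batching and indexing, are easy to show in miniature. This sketch uses sqlite3 so it runs anywhere; the same principles apply to the PostgreSQL backend Immich actually uses, and the table here is a made-up example, not Immich's real schema:

```shell
# Batch-insert + indexing sketch (sqlite3 for portability; principle is the
# same on PostgreSQL). Table and columns are hypothetical, not Immich's schema.
db=/tmp/photos_demo.db
rm -f "$db"
sqlite3 "$db" <<'SQL'
CREATE TABLE photos (filename TEXT, taken_at TEXT);
-- Batch operation: one statement, one transaction, three rows,
-- instead of three round trips.
INSERT INTO photos (filename, taken_at) VALUES
  ('a.jpg', '2023-01-01'),
  ('b.jpg', '2023-01-02'),
  ('c.jpg', '2023-01-03');
-- Index the metadata column the app actually queries by (date taken),
-- so historical retrieval doesn't scan the whole table.
CREATE INDEX idx_photos_taken_at ON photos (taken_at);
SQL
sqlite3 "$db" "SELECT COUNT(*) FROM photos;"
```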
Cost: Free software, but required learning about PostgreSQL database management
2024: Full Stack (Monitoring, UI.Vision, Network Segmentation)
By 2024, the homelab looked like a real infrastructure:
Services Running:
- Jellyfin: Media streaming (source of streaming optimization knowledge)
- Immich: Photo management (data pipeline architecture)
- n8n: Workflow orchestration (production tool tested at home first)
- Node-RED: IoT automation (basic workflow fundamentals)
- UI.Vision: Browser automation testing (testing ideas for mobile verification tools)
- PostgreSQL: Database backend (data management)
Infrastructure:
- Docker Compose for orchestration
- VPN tunnel for secure remote access
- Network segmentation (critical systems isolated from general traffic)
- Firewall rules for traffic control
- Monitoring stack tracking uptime and resource usage
- Backup strategy: Cloud backups + local redundancy
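The segmentation idea, critical systems isolated from general traffic, can be sketched as a small nftables rule set. The VLAN interface names are assumptions; adapt them to your own network:

```shell
# Network-segmentation sketch: block the general VLAN from initiating
# connections into the critical-services VLAN, while still allowing replies.
# Interface names (vlan10/vlan20) are illustrative assumptions.
cat > /tmp/homelab-segment.nft <<'EOF'
table inet lab_filter {
  chain forward {
    type filter hook forward priority 0; policy accept;
    # allow reply traffic back into the general network
    ct state established,related accept
    # drop new connections from the general VLAN into the critical VLAN
    iifname "vlan20" oifname "vlan10" drop
  }
}
EOF
# Apply (requires root) with: nft -f /tmp/homelab-segment.nft
```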
This looks like enterprise architecture, just smaller.
The Architecture Lesson:
When everything lives on one machine, a crash means everything goes down. So I learned:
- Service isolation (one container crashing doesn't take down others)
- Redundancy (database replicated, critical services have failover)
- Monitoring (you can't fix what you don't know is broken)
- Backup strategies (you can't recover without redundancy)
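A monitoring stack can start as small as a script that probes each service and records what it finds. The ports here are illustrative defaults; a real version would fire an alert (email, push notification) instead of printing:

```shell
# Minimal health-check sketch: probe each service endpoint, collect status.
# URLs/ports are illustrative; replace with your actual services.
status=""
check() {   # usage: check <name> <url>
  if curl -fsS --max-time 5 "$2" >/dev/null 2>&1; then
    status="$status $1=up"
  else
    status="$status $1=down"
  fi
}
check jellyfin "http://localhost:8096/health"
check nodered  "http://localhost:1880/"
echo "status:$status"
```

Run it from cron every few minutes and you already have the "you can't fix what you don't know is broken" problem half solved.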
These aren't homelab problems. These are enterprise problems. But I learned them at home, at small scale, with personal data, where the cost of failure was low.
The Work Connection: Direct Applications
Let me be specific about how this homelab knowledge transferred:
Docker at Work:
- 2022: Learned Docker fundamentals on homelab
- 2023: Applied containerization to medical device automation systems
- Result: Easier deployment, better isolation, simpler scaling
n8n Workflows:
- 2021-2022: Prototyped RMA automation at home
- 2022-2023: Deployed production RMA automation at a global medical aesthetics and technology company
- 2024: Scaling n8n usage across multiple workflows
- Result: 250+ hours annually recovered
Data Pipeline Thinking:
- 2023: Learned about batch operations, indexing, retrieval optimization via Immich
- 2024: Applied principles to large medical device database migrations
- Result: 40% faster data processing, better query performance
Monitoring Strategy:
- 2024: Built homelab monitoring to track system health
- 2024: Implemented production monitoring for critical systems at a global medical aesthetics and technology company
- Result: Early warning for issues, proactive maintenance instead of firefighting
Network Security:
- 2024: Configured VPN, network segmentation, firewall rules at home
- 2025: Applied same principles to protecting sensitive medical device systems
- Result: Better security posture without expensive enterprise solutions
Every single skill developed at home had a direct application at work, often within the same year.
Why This Matters: The R&D Lab Advantage
Here's what I realized: My homelab is an R&D lab that costs almost nothing to run.
Traditional R&D requires:
- Budget approval
- Testing environments
- IT infrastructure overhead
- Risk assessment
Homelab R&D requires:
- Old hardware and free software
- Your own time
- Your own willingness to break things
- Your own learning
The advantage is speed and freedom. I can test architectural ideas without waiting for approval. I can make mistakes without risk. I can learn deeply because I'm responsible for everything: building, fixing, maintaining.
And because all the tools are the same ones used in enterprise (Docker, open-source databases, standard Linux), the knowledge transfers directly. I'm not learning hypothetical concepts. I'm learning with the actual tools.
The Philosophy
There's a debate in tech about whether hobbies and work should overlap. Some people say keep them separate. I've always believed integration is valuable: work makes you better at hobbies, hobbies make you better at work.
But I'll be honest: I don't run the homelab as a hobby in the traditional sense. I don't run it because I enjoy managing servers (I don't particularly).
I run it because it's the fastest way to learn enterprise architecture in a safe environment. Every service is a deliberate choice to learn something I'll need at work.
Is it a hobby? Sure, in the sense that it's personal time and personal hardware.
Is it also professional development? Absolutely.
The best learning isn't one or the other. It's both simultaneously.
Current State (2024)
Five years later, the homelab is:
- Reliable: Weeks of uptime between maintenance
- Educational: Still proving concepts before deploying them
- Self-maintaining: Backups run automatically, monitoring alerts me to issues
- Efficient: Runs on a single small server, uses minimal power
And I still use it that way. New architectural idea at work? Build it at home first. Trying a new tool? Test it in the lab. Want to understand how something works? Break it safely at home.
Looking Back: The Transfer Rate
The key insight: In five years, I've never built something in the homelab that didn't eventually apply at work. Never.
Every tool runs both places. Every architecture principle gets tested at home before production. Every failure at home prevented a more expensive failure at work.
This is the opposite of "moonlighting." This isn't me doing something different outside work. This is me building my own laboratory so I can bring better ideas and deeper expertise into work.
Some people read books about architecture. I build it in my basement and debug it at 11 PM on a Tuesday.
Both people learn. I just learn by doing.
The Takeaway
You don't need an enterprise budget to learn enterprise architecture.
You don't need a corporation's infrastructure to understand how systems scale.
You don't need permission to build and break things in the service of learning.
A homelab is one of the highest-ROI educational investments available. Low cost. High depth. Direct applicability.
And the best part: Every time you fix something at home, every time you debug a problem, every time you design for reliability in your own infrastructure, you're building expertise that compounds in your professional work.
That's not a hobby. That's an investment in becoming genuinely better at what you do.
Shi Jun
Senior Regional Technical Operation and Quality Engineer, Medical Technology / Pharma Industry. Building automated systems since 2008. Philosophy: "Using less resource and achieve big time."