About the Founder
Built by an engineer who's been on‑call when production fails.
Ohm Core exists because I kept seeing the same pattern: systems built without operational experience, designed for the demo — not for 3am when everything breaks at once. This practice was founded to fix that.

Background
A career spent operating production systems under real-world pressure.
I've spent my career at the intersection of software engineering and infrastructure — building the systems that high-growth products depend on to stay up, stay fast, and stay cheap enough to survive the growth curve.
From designing the backend architecture of fintech APIs processing financial transactions at volume, to owning the Kubernetes clusters, delivery pipelines, and observability stacks that keep those systems running — I've worked across the full depth of production infrastructure.
What I've learned is that most infrastructure failures are predictable. They come from the same patterns: insufficient redundancy, missing observability, deployments that were never tested under real load, and cost structures that no one owns. Ohm Core exists to solve these before they become incidents.
Domain Expertise
Where the depth is.
Distributed Systems Architecture
Designing systems where components can fail independently — and the product continues to work. Data consistency under partition, failure isolation, graceful degradation that protects users when things go wrong.
Kubernetes & Platform Engineering
Production-grade Kubernetes from the ground up: cluster architecture, RBAC, networking, autoscaling, GitOps delivery pipelines. Platforms built around developer experience — so teams ship faster without compromising reliability.
Reliability Engineering
SLOs designed around what users actually experience. Alerting that pages for real reasons — not noise. Runbooks that make on-call manageable. The goal: mean time to resolution measured in minutes, not hours.
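The arithmetic behind an SLO target can be made concrete. As a minimal sketch (the 99.9% target and 30-day window are illustrative figures, not client numbers), an availability SLO translates directly into an error budget — the downtime you are allowed to spend before the SLO is breached:

```python
# Illustrative sketch: converting an availability SLO into an error budget.
# The 99.9% target and 30-day window below are example values.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability over the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo_target)

# A 99.9% availability SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))
```

That budget is what makes "minutes, not hours" a hard constraint rather than a slogan: a single one-hour incident exhausts more than the entire monthly budget at 99.9%.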
Cloud Infrastructure (AWS & GCP)
Infrastructure as code from scratch or inherited chaos — both require the same fundamentals: reproducibility, least-privilege access, cost accountability, and environments that behave consistently across staging and production.
Observability & Incident Response
Full-stack visibility across services, infrastructure, and data pipelines. Distributed tracing that actually surfaces the root cause. The difference between flying blind on-call and walking into incidents with context.
High-Throughput Backend Systems
Backend architectures designed for real load — not benchmark load. Caching strategies that hold under pressure, database access patterns built for scale, and services that degrade gracefully when upstream dependencies fail.
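Graceful degradation under a failing upstream usually comes down to a simple pattern: serve the last known-good value instead of an error. A minimal sketch, with illustrative names (fetch_live, CACHE) that stand in for a real client and cache layer:

```python
# Illustrative sketch of graceful degradation: fall back to a cached
# last-known-good value when an upstream dependency fails.
# fetch_live and CACHE are stand-ins, not a specific client or store.

CACHE = {"rates": {"USD": 1.0}}  # last known-good response

def fetch_live(key: str):
    raise TimeoutError("upstream unavailable")  # simulate a failing dependency

def get_with_fallback(key: str):
    """Try the upstream; on failure, serve the last known-good value."""
    try:
        fresh = fetch_live(key)
        CACHE[key] = fresh  # refresh the fallback on success
        return fresh
    except (TimeoutError, ConnectionError):
        return CACHE.get(key)  # degrade: stale but usable

print(get_with_fallback("rates"))  # serves cached data despite the outage
```

The design choice is that staleness is visible and bounded, while a hard failure is neither: the product keeps working on slightly old data instead of falling over with the dependency.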
Engineering Impact
Results from real systems, not hypotheticals.
Traffic capacity unlocked
Autoscaling architectures that absorbed order-of-magnitude traffic spikes without manual intervention or emergency patches.
From the autoscaling architecture of the M-Tiba healthcare platform, 2021–2023
MTTR reduction
Observability overhauls that cut mean time to resolution from hours to minutes — by surfacing the right signal at the right time.
From observability overhaul across fintech and healthcare platforms
Cloud spend saved
Right-sizing, reserved capacity, and cost frameworks that reduced cloud bills without introducing new failure points.
From cloud cost optimisation engagements across AWS-hosted platforms
Zero-downtime deployments
Delivery pipelines and rollout strategies that ship to production continuously — without maintenance windows or user-facing disruption.
From TLIP distributed trade logistics platform, ongoing
How I Work
Systems thinking over tool selection.
Operational experience informs every design decision.
I don't design systems I wouldn't want to be on-call for. Every architecture decision passes through the lens of: what happens when this fails at 2am, and who gets paged?
Reliability is engineered in, not bolted on.
Observability, redundancy, and graceful degradation are not afterthoughts. They're the difference between a one-hour incident and a four-hour outage that makes the news.
Cost is a first-class engineering concern.
Cloud bills grow when no one owns them. Every infrastructure decision carries a cost implication — and I treat that as part of the engineering problem, not someone else's spreadsheet.
Work With Ohm Core
Looking for an engineer who's operated systems at scale?
Every engagement starts with understanding your system and where it's at risk. No slide decks — just a detailed audit of what needs to change, and how we'll fix it.