Engineering

Data Infrastructure & Internal Tools

Built ETL pipelines, flat table architecture, and internal AI tools that made teams self-sufficient

40%
less analysis overhead
Company
Intellect
Role
Senior Data Analyst

The Problem

Everything was manual and bottlenecked

As Intellect grew, the analytics workflow was entirely manual and bottlenecked on the data team. Monthly provider payouts for 500+ providers were calculated manually by finance. Event data was scattered across multiple systems with no standardised taxonomy.

Product and growth teams filed tickets for every data request, creating a queue that slowed decision-making. There was no self-serve capability.

500+ provider payouts calculated manually by finance each month

Event data scattered across systems with no standardised taxonomy

Every data request required a ticket, creating long queues

Zero self-serve capability across the organisation

The Approach

Build infrastructure, not just dashboards

Rather than just building dashboards, I focused on the infrastructure layer that would let the entire organisation serve itself with data: pipelines, standardised data models, and tools that non-technical teams could use independently.

Core Philosophy

Give teams the tools and data to answer their own questions

Pipelines · Standardised Models · Self-Serve Tools · Automation

What I Built

Four systems that changed how teams work

01

ETL Pipelines & Cron Jobs

Automated monthly payouts for 500+ providers, integrating base pay, bonus logic, and utilisation tracking. Replaced a fully manual finance workflow that took days each month.
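The core of a payout pipeline like this is a per-provider calculation combining base pay, bonus logic, and utilisation. A minimal sketch of that shape, with hypothetical field names and placeholder rates (the real bonus rules and thresholds lived in the pipeline's own config):

```python
from dataclasses import dataclass

@dataclass
class ProviderMonth:
    """One provider's activity for a payout month (hypothetical schema)."""
    provider_id: str
    base_pay: float           # contracted monthly base
    sessions_held: float      # sessions actually delivered
    sessions_capacity: float  # sessions the provider made available

def monthly_payout(p: ProviderMonth, bonus_rate: float = 50.0,
                   utilisation_threshold: float = 0.8) -> float:
    """Base pay plus a per-session bonus once utilisation clears a threshold.

    The rates here are illustrative placeholders, not the real rate card.
    """
    utilisation = (p.sessions_held / p.sessions_capacity
                   if p.sessions_capacity else 0.0)
    bonus = bonus_rate * p.sessions_held if utilisation >= utilisation_threshold else 0.0
    return round(p.base_pay + bonus, 2)

# 45 of 50 sessions → 90% utilisation, so the bonus applies
print(monthly_payout(ProviderMonth("prov_001", 2000.0, 45, 50)))  # 4250.0
```

A cron job then runs this over all 500+ providers each month and writes the results to a table finance can review, instead of finance computing each row by hand.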

02

Flat Table Architecture

Designed a standardised event taxonomy and flat table structure across the platform. Every product event, session event, and operational metric flowed into clean, queryable tables.
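The idea behind a flat table with a standardised taxonomy is that every source system maps its payloads onto one shared column set, so downstream queries never care where an event came from. A sketch of that mapping step, with illustrative column and source names rather than Intellect's actual schema:

```python
# Every source event is flattened onto the same column set so the
# resulting tables stay queryable with plain SQL.
FLAT_COLUMNS = ["event_time", "user_id", "event_domain", "event_action", "properties"]

def flatten_event(raw: dict, source: str) -> dict:
    """Map a source-specific payload onto the flat taxonomy."""
    # Each source declares how its fields map to (time, user, domain, action).
    mappers = {
        "app": lambda e: (e["ts"], e["uid"], "product", e["name"]),
        "ops": lambda e: (e["timestamp"], e["user"], "operations", e["action"]),
    }
    mapped_keys = ("ts", "uid", "name", "timestamp", "user", "action")
    ts, uid, domain, action = mappers[source](raw)
    return {
        "event_time": ts,
        "user_id": uid,
        "event_domain": domain,
        "event_action": f"{domain}.{action}",  # dotted taxonomy key
        # Everything not consumed by the mapping survives as properties.
        "properties": {k: v for k, v in raw.items() if k not in mapped_keys},
    }

row = flatten_event({"ts": "2024-01-05T09:00:00Z", "uid": "u42",
                     "name": "session_booked"}, "app")
print(row["event_action"])  # product.session_booked
```

With every event landing in the same shape, "sessions booked last week" is one `GROUP BY` on `event_action`, regardless of which system emitted the events.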

03

AI-Powered Pricing Tool

Built an internal tool for the Revenue and Partnerships team that takes deal inputs and outputs client pricing using embedded business logic. Replaced a complex manual spreadsheet process.
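The value of a tool like this is that the business logic lives in one place instead of a fragile spreadsheet. A minimal sketch of deal inputs flowing through embedded pricing rules; the tiers and discounts below are placeholder values, not the Revenue team's actual rate card:

```python
def quote_price(seats: int, term_months: int, base_per_seat: float = 12.0) -> float:
    """Turn deal inputs into a client quote (illustrative logic only)."""
    # Volume tiers: larger deals earn a larger per-seat discount.
    if seats >= 5000:
        discount = 0.30
    elif seats >= 1000:
        discount = 0.20
    elif seats >= 250:
        discount = 0.10
    else:
        discount = 0.0
    # Longer commitments earn a small additional discount.
    term_discount = 0.05 if term_months >= 24 else 0.0
    per_seat = base_per_seat * (1 - discount) * (1 - term_discount)
    return round(per_seat * seats * term_months, 2)

print(quote_price(seats=1200, term_months=12))  # 138240.0
```

Because the rules are code, every quote is reproducible and a rate-card change is a one-line edit rather than a spreadsheet audit.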

04

Self-Serve Algorithm Testing Toolkit

Built a batch simulator, interactive CLI, and sensitivity analyser so product and growth teams could run scenario testing on the recommendation algorithm independently without filing a data request.
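A sensitivity analyser of this kind re-ranks the same batch of items under a sweep of parameter values and reports how the output moves. A toy sketch, with a stand-in scoring function rather than the real recommendation algorithm:

```python
def recommendation_score(relevance: float, recency: float, w_recency: float) -> float:
    """Toy weighted score standing in for the real algorithm."""
    return (1 - w_recency) * relevance + w_recency * recency

def sensitivity_sweep(items: list[dict], weights: list[float]) -> dict:
    """Rank the same batch under each weight; report the top pick per weight."""
    results = {}
    for w in weights:
        ranked = sorted(
            items,
            key=lambda it: recommendation_score(it["relevance"], it["recency"], w),
            reverse=True,
        )
        results[w] = ranked[0]["id"]
    return results

batch = [
    {"id": "a", "relevance": 0.9, "recency": 0.1},
    {"id": "b", "relevance": 0.5, "recency": 0.8},
    {"id": "c", "relevance": 0.7, "recency": 0.55},
]
print(sensitivity_sweep(batch, weights=[0.0, 0.5, 1.0]))
# {0.0: 'a', 0.5: 'b', 1.0: 'b'}
```

Wrapping a sweep like this in a CLI is what let product and growth teams answer "what if we weight recency higher?" themselves instead of filing a ticket.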

The Result

Teams became self-sufficient

40%

Analysis overhead cut — teams self-served instead of waiting for data team

Days → Auto

Monthly payout process went from days of manual work to automated pipeline

1 Taxonomy

Standardised event taxonomies enabled consistent metrics across all teams

3x

Product and growth teams ran 3x more experiments with self-serve tools

Business Impact

Infrastructure that became the foundation

Finance team reclaimed days of manual work each month

Automated provider payout calculations eliminated a recurring bottleneck for the entire finance team.

Product decisions accelerated

Teams no longer waited for data requests. Self-serve tools put the answers directly in the hands of decision-makers.

Consistent data definitions eliminated confusion

Standardised taxonomy meant every team was working from the same source of truth. No more conflicting numbers.

Infrastructure became the foundation for all analytics at Intellect

The flat table architecture, pipelines, and tooling became the layer everything else was built on.

Let's work together.

Looking for a data person who can go from SQL to boardroom? I'd love to chat.