🎨 A Day in the Life

Procurement AI Engineer

See how the products I build solve real pain points from 9 AM to 5 PM

📊 Billions Analyzed · 🌍 50+ Engagements · Open to Work
The Problem

Messy Input Reality

Client sent 50 exports with different schemas. I have 48 hours to build a spend cube.

If this isn’t standardized + replayable, every downstream analysis becomes a one-off fire drill.
System Design

Self-Serve Workbench + Execution Platform

System design: intake → orchestration → execution → artifacts.

Surfaces
Workbench UI · API
Stack
Vue.js · Flask API · Azure Queues/Blob · Databricks Jobs · Postgres
Path
Upload -> Validate -> Queue -> Execute -> Artifact Delivery
What they do
  • Select a workflow template; upload raw extracts; map columns once.
  • Run jobs self-serve; monitor progress; rerun safely with parameter tweaks.
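"Map columns once" is the step that tames 50 differently-shaped exports. A minimal sketch of the idea, assuming a saved per-schema mapping replayed on every upload (the `CANONICAL` list, `normalize` function, and column names are illustrative, not the production schema):

```python
# Canonical spend-cube columns every client extract must map into.
CANONICAL = ["supplier", "category", "amount", "invoice_date"]

def normalize(raw_rows, mapping):
    """Rename source columns to canonical names; drop everything else.

    mapping is saved once per export schema, e.g.
    {"Vendor Name": "supplier", "GL Desc": "category", ...}
    """
    out = []
    for row in raw_rows:
        canon = {mapping[k]: v for k, v in row.items() if k in mapping}
        missing = [c for c in CANONICAL if c not in canon]
        if missing:
            # Fail the upload at validation time, not mid-analysis.
            raise ValueError(f"unmapped canonical columns: {missing}")
        out.append(canon)
    return out

# One export's mapping, defined once and stored with the client config.
mapping = {
    "Vendor Name": "supplier",
    "GL Desc": "category",
    "Net Amount": "amount",
    "Inv Date": "invoice_date",
}
raw = [{"Vendor Name": "Acme", "GL Desc": "MRO", "Net Amount": "120.50",
        "Inv Date": "2025-01-31", "Extra": "ignored"}]
rows = normalize(raw, mapping)
```

Because the mapping is stored, a rerun with tweaked parameters reuses it unchanged, which is what makes reruns safe.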
What they get
  • A standardized spend cube foundation with deterministic outputs.
  • A repeatable run record (what ran, when, and what files came out).
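A run record only needs a few fields to answer "what ran, when, and what came out." A hedged sketch, assuming a JSON-lines run ledger; the `RunRecord` fields are illustrative, not the production schema:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class RunRecord:
    workflow: str     # which template ran
    params: dict      # parameter tweaks used for this run
    started_at: str   # UTC timestamp
    artifacts: list = field(default_factory=list)

    def add_artifact(self, path: str, content: bytes):
        # Hash contents so a rerun can be verified as byte-identical,
        # i.e. the outputs really are deterministic.
        self.artifacts.append(
            {"path": path,
             "sha256": hashlib.sha256(content).hexdigest()})

record = RunRecord(workflow="spend_cube_v2",
                   params={"fiscal_year": 2025},
                   started_at=datetime.now(timezone.utc).isoformat())
record.add_artifact("out/cube.parquet", b"...parquet bytes...")
log_line = json.dumps(asdict(record))  # appended to the run ledger
```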
Under the hood
  • Queue-backed orchestration dispatching Databricks workloads with capacity-aware scheduling.
  • Blob-first artifact contracts + progress tracking so non-technical teams can trust the runner.
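The capacity-aware scheduling loop can be sketched in a few lines: drain a queue of job messages, but never hold more than a fixed number of runs in flight. This is a self-contained toy, assuming `submit_run` and `poll_state` as stand-ins for the real Databricks Jobs API calls:

```python
import queue

MAX_CONCURRENT = 2          # capacity cap on simultaneous Databricks runs
pending = queue.Queue()     # stand-in for the Azure queue
in_flight = {}              # run_id -> job name
finished = []

def submit_run(msg):        # stand-in for the jobs run-now call
    return f"run-{msg['job']}"

def poll_state(run_id):     # stand-in for the run-status call
    return "TERMINATED"     # toy: every run finishes immediately

for job in ("cube_build", "dedupe", "category_map"):
    pending.put({"job": job})

while not pending.empty() or in_flight:
    # Reap completed runs first to free capacity.
    for run_id in list(in_flight):
        if poll_state(run_id) == "TERMINATED":
            finished.append(in_flight.pop(run_id))
    # Dispatch new work only while under the concurrency cap.
    while len(in_flight) < MAX_CONCURRENT and not pending.empty():
        msg = pending.get()
        in_flight[submit_run(msg)] = msg["job"]
```

The cap is what keeps one heavy client upload from starving everyone else's jobs; the queue is what lets the UI return immediately after "Run."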
Cuts “please run this notebook” bottlenecks and reduces time-to-first-usable-output.
Turns consulting workflows into reusable services (not scripts).

Decision Memos

Snapshots of architectural judgment.

ARCH_DECISION · 2025-10-12

Orchestrator vs Use-Case

Why we abandoned the 'One Grand Planner' for purpose-built tool harnesses.

GAP_ANALYSIS · 2025-11-05

Enterprise GPTs

The missing reliability layer in stock GPT builders.