The starting point for this project was a materials tracking process that existed almost entirely in one person's head, supplemented by notepads, a few spreadsheets, and phone calls. The goal was to get that data into a structured system that could support inventory tracking, forecasting, analysis, and eventually automated reorder workflows.
Here's how we approached the engineering work.
Discovery: understanding unstructured data
Before designing a schema, we needed to understand what data actually existed and in what form. We spent time shadowing the current process, and what we found was messier than expected.
The "source of truth" was a combination of sources. Physical notepads held handwritten stock counts, updated irregularly. A shared Excel workbook listed materials per project, maintained by one person, with no version control and formulas that had broken months ago. The inventory manager carried running totals in his head and relayed them verbally. Vendor invoices and packing slips, filed in binders, were technically the most accurate record but had no digital equivalent.
There was no consistent naming convention for materials. The same product might appear as five different strings across different projects and spreadsheets. Quantities sometimes included waste factors, sometimes didn't. Units of measure were inconsistent.
Our first deliverable was a data audit: a structured inventory of what information existed, where, in what format, and how reliable it was.
Schema design
We designed the database around five core entities:
Materials represented the canonical catalog. Each record had a unique ID, standardized name, category, default unit of measure, and vendor associations. We built this by deduplicating across all sources. The initial catalog had about 340 items. We built a "suggest new material" flow for items not yet cataloged.
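The deduplication pass can be sketched roughly as follows. This is a minimal illustration, not the production pipeline: it normalizes raw strings and clusters them with a simple similarity ratio, and the material names and threshold are hypothetical.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace in a raw vendor/spreadsheet string."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
    return " ".join(cleaned.split())

def dedupe(names: list[str], threshold: float = 0.85) -> dict[str, str]:
    """Map each raw string to a canonical name; the first variant seen wins."""
    canonical: list[str] = []
    mapping: dict[str, str] = {}
    for raw in names:
        norm = normalize(raw)
        match = next(
            (c for c in canonical
             if SequenceMatcher(None, norm, normalize(c)).ratio() >= threshold),
            None,
        )
        if match is None:          # no close-enough existing entry: new catalog item
            canonical.append(raw)
            match = raw
        mapping[raw] = match
    return mapping

variants = ["2x4 Stud 8ft", "2x4 stud, 8 ft.", "Drywall 4x8"]
mapping = dedupe(variants)  # the two 2x4 variants collapse to one canonical name
```

In practice every automated merge was queued for human review before entering the catalog; fuzzy matching only proposed candidates.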
Projects mapped to active jobs. Each project had an estimated materials list from takeoffs and a running log of actual consumption. The delta between estimated and actual became one of our most useful analytics.
Inventory tracked quantities by location using a ledger model. Every change was recorded as a transaction (received, consumed, transferred, adjusted), and the current quantity was derived from the sum. This gave us a complete audit trail.
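The ledger model reduces to a simple idea: never store a mutable quantity, only signed transactions. A minimal sketch (field names and IDs are illustrative, not the actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    material_id: str
    location: str
    kind: str        # "received" | "consumed" | "transferred" | "adjusted"
    quantity: float  # signed: positive adds stock, negative removes it

def current_quantity(ledger: list[Transaction], material_id: str, location: str) -> float:
    """Derive on-hand stock by summing the append-only ledger; nothing is updated in place."""
    return sum(t.quantity for t in ledger
               if t.material_id == material_id and t.location == location)

ledger = [
    Transaction("MAT-017", "warehouse", "received", 100),
    Transaction("MAT-017", "warehouse", "consumed", -35),
    Transaction("MAT-017", "warehouse", "adjusted", -2),  # shrinkage found at a count
]
on_hand = current_quantity(ledger, "MAT-017", "warehouse")  # 63
```

Because transactions are append-only, corrections are new "adjusted" rows rather than edits, which is what preserves the audit trail.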
Purchase Orders tracked orders with line items linked to the catalog, a status workflow (draft, submitted, acknowledged, shipped, received, closed), and project associations. We handled split shipments with partial receipt against line items.
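Partial receipt against line items can be sketched like this, assuming the status transitions described above; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class POLine:
    material_id: str
    ordered: float
    received: float = 0.0

@dataclass
class PurchaseOrder:
    lines: list[POLine]
    status: str = "submitted"

    def receive(self, material_id: str, qty: float) -> None:
        """Record a (possibly partial) receipt; mark the PO received once every line is full."""
        line = next(l for l in self.lines if l.material_id == material_id)
        line.received = min(line.ordered, line.received + qty)
        if all(l.received >= l.ordered for l in self.lines):
            self.status = "received"

po = PurchaseOrder([POLine("MAT-017", 100), POLine("MAT-044", 20)])
po.receive("MAT-017", 60)   # first shipment arrives: PO stays open
po.receive("MAT-017", 40)
po.receive("MAT-044", 20)   # second shipment completes the order
```

Each receipt would also emit a "received" inventory transaction, tying the PO workflow to the ledger.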
Vendors held supplier information, lead times, and pricing history stored per material with effective dates.
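Effective-dated pricing means "the price on date X" is a lookup, not a stored value. A minimal sketch with illustrative prices, assuming the history is kept sorted by effective date:

```python
from bisect import bisect_right
from datetime import date

def price_on(history: list[tuple[date, float]], when: date) -> float:
    """Return the price whose effective date is the latest one not after `when`.
    `history` must be sorted ascending by effective date."""
    dates = [d for d, _ in history]
    i = bisect_right(dates, when)
    if i == 0:
        raise ValueError("no price effective yet on that date")
    return history[i - 1][1]

history = [(date(2023, 1, 1), 4.10), (date(2023, 6, 1), 4.55)]
spring_price = price_on(history, date(2023, 3, 15))  # 4.10
summer_price = price_on(history, date(2023, 7, 1))   # 4.55
```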
Building input tools for field use
The field team was on job sites, in trucks, and in a warehouse with inconsistent WiFi. Any input tool had to meet these constraints:
Mobile-first. We built a progressive web app to avoid app store friction. Large tap targets, minimal typing, smart defaults.
Offline-capable. Service worker with IndexedDB cache. Data synced when connectivity returned. Conflict resolution was simple: last-write-wins at the transaction level, which worked because inventory transactions are append-only.
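The sync merge is simple precisely because the ledger is append-only: merging two devices' transaction sets is mostly a union keyed by a client-generated ID, with last-write-wins only on the rare ID collision. A sketch of that merge logic (record shapes and timestamps are illustrative):

```python
def merge(local: dict[str, dict], remote: dict[str, dict]) -> dict[str, dict]:
    """Union two append-only transaction sets keyed by client-generated UUID.
    On an id collision (the same transaction edited on two devices), the
    later updated_at wins -- last-write-wins at the transaction level."""
    merged = dict(local)
    for txn_id, txn in remote.items():
        if txn_id not in merged or txn["updated_at"] > merged[txn_id]["updated_at"]:
            merged[txn_id] = txn
    return merged

local  = {"a1": {"qty": 5, "updated_at": 100}}
remote = {"a1": {"qty": 7, "updated_at": 140},   # later edit of the same transaction
          "b2": {"qty": 3, "updated_at": 120}}   # transaction only the other device saw
synced = merge(local, remote)
```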
Minimal friction. Logging material usage needed under 30 seconds: select project (defaulted to assigned), scan or search material, enter quantity, confirm. Four taps and one number.
Barcode scanning via device camera. We generated EAN-13 barcode labels for every cataloged product, organized by category and printed on standard label sheets. The field team could scan a bin label instead of searching by name.
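Generating valid EAN-13 labels comes down to computing the check digit for a 12-digit payload: digits in odd positions (1-indexed) are weighted 1, even positions weighted 3, and the check digit rounds the total up to a multiple of 10. A small sketch:

```python
def ean13_check_digit(first12: str) -> str:
    """Compute the EAN-13 check digit for a 12-digit payload.
    Odd positions (1-indexed) get weight 1, even positions weight 3."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# Known example: payload "400638133393" yields check digit "1" (full code 4006381333931).
digit = ean13_check_digit("400638133393")
```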
We tested with three field team members for a week before broad rollout. Daily feedback, daily changes.
Analytics layer
For operations: burn rate calculations comparing consumption against estimates. Reorder point alerts generating pre-populated PO drafts. Lead time tracking using historical PO data.
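The reorder alert logic reduces to a standard formula: trigger when on-hand stock falls to expected consumption over the vendor lead time plus a safety buffer. A sketch with illustrative numbers:

```python
def reorder_point(daily_burn: float, lead_time_days: float, safety_stock: float) -> float:
    """Stock level that should trigger a draft PO: expected consumption
    over the vendor's lead time, plus a safety buffer."""
    return daily_burn * lead_time_days + safety_stock

def needs_reorder(on_hand: float, daily_burn: float, lead_time_days: float,
                  safety_stock: float) -> bool:
    return on_hand <= reorder_point(daily_burn, lead_time_days, safety_stock)

# Burning 12 units/day with a 5-day lead time and 20-unit buffer -> reorder at 80 units.
alert = needs_reorder(on_hand=75, daily_burn=12, lead_time_days=5, safety_stock=20)
```

In the real system, daily_burn came from the consumption ledger and lead_time_days from historical PO data, so both inputs improved as more transactions accumulated.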
For the owner: spend analysis by project, vendor, material category, and time period. Estimate accuracy reporting comparing takeoff estimates against actuals on completed projects. Waste metrics comparing received against consumed quantities.
Built as SQL views and materialized views in PostgreSQL, exposed through an API to a dashboard frontend.
Migration strategy
The initial load didn't need to be perfect. It needed to be good enough to start from. We imported from spreadsheets, manually entered current stock from a physical inventory count, and accepted that historical transaction data didn't exist in usable form.
We marked the migration date as the start of reliable data. The physical inventory count was critical for establishing a known-good baseline.
Validating the system
We tracked adoption metrics (daily active users, transactions per day), data quality metrics (reconciliation discrepancy rates, unknown material entries), and outcome metrics (rush order frequency, materials variance, time saved in weekly purchasing).
After three months, rush orders had dropped. Materials variance on new projects was tighter. The weekly purchasing process was down to about an hour of reviewing system-generated PO drafts.
The inventory manager shifted from single point of failure to managing vendor relationships and training new hires. His knowledge was still valuable. It was just no longer a bottleneck.
Lessons
The hardest part isn't the database or the API. It's the human element. Minimize friction: make input tools simple, make the system forgiving (undo, edit, soft deletes), make benefits visible quickly.
Start with data capture before analytics. A dashboard without reliable data creates false confidence.
And do a physical inventory count. Starting with guessed numbers means every report is suspect.