Case Study: Platform Audit Methodology
Section 01

Why we published this

When a small-business owner hears "platform audit," the reaction is almost always the same: great, but what do I actually get? A PDF? A call? A spreadsheet of bugs? Something that lives in a Notion page somewhere and dies there? Buyers are right to be skeptical -- the phrase gets used to mean everything from a 30-minute Zoom to a six-figure security engagement.

So rather than write another services page, we decided to publish one. A complete Platform Audit, performed on a real codebase, with every single deliverable linked directly from this article. What you're reading is the walkthrough -- the subject is our own Ad Display demo system, the reports are the actual HTML we'd hand a client, and the findings are unvarnished.

Who this is for. You built something -- with a developer, an agency, a freelancer, or increasingly with AI tools like Cursor, v0, or Bolt. It works. Customers use it. But you have no idea whether it's a solid foundation or a pile of sticks waiting for the first real load. This case study shows you exactly what a Platform Audit would tell you about it.
Section 02

What a Platform Audit is, in plain English

A Platform Audit is a structured, third-party review of a software system -- the code, the architecture, the security posture, the performance, and its readiness to become a real product people pay for. We read every file, run the tools, map the dependencies, and then translate everything into language a non-technical owner can actually use to make decisions.

It is not a penetration test. It is not a line-by-line code review. It is not a rewrite quote. It is the report you wish you had before you signed your last development contract.

Glossary (the only jargon you need):
  • Platform -- the running software system as a whole: front-end, back-end, database, infrastructure, and the glue between them.
  • Audit -- a systematic, evidence-based review against a fixed rubric, producing findings and recommendations.
  • SaaS readiness -- how close the system is to being a multi-tenant product you can sell as a subscription, rather than a one-off build for a single customer.
  • Technical debt -- shortcuts taken during development that will cost time or money to fix later. Some is fine. A lot is a problem.
  • Attack surface -- every place a hostile actor could try to poke at your system. Smaller is better.
Section 03

Meet the subject: the Ad Display system

For this case study we audited our own Ad Display demo -- a realistic, medium-sized Next.js application that simulates a digital signage platform: campaigns, creatives, schedules, screens, analytics. It is a believable stand-in for the kind of system small and mid-sized businesses bring to us every month. The full source is on GitHub, and we audited a fixed commit (f6d21ca) so anyone can reproduce the findings.

  • 13,700 lines of code
  • 75 source files
  • 19 page routes

Behind those numbers: roughly 50 API endpoints, 33 third-party dependencies, and -- pointedly -- zero automated tests. That "zero tests" number is not a gotcha; it is exactly the kind of thing the audit is designed to surface and contextualize.

Section 04

Our methodology in plain language

Every Dovito engagement runs through three phases: Discover, Validate, Transform. An audit is almost entirely Discover and Validate, with a small Transform deliverable at the end (the prioritized action plan). Inside those phases, we execute a fixed eight-phase checklist so nothing important gets skipped:

  1. Repository intake -- clone, inventory, fix the commit SHA so findings are reproducible.
  2. Architecture mapping -- identify the stack, the boundaries, the data model, the runtime.
  3. Static code quality -- type safety, linting, dead code, duplication, structural smells.
  4. Security review -- known vulnerabilities, configuration, authentication, headers, secrets.
  5. Performance and reliability -- bundle size, render cost, database patterns, error handling.
  6. SaaS readiness -- seven-dimension rubric scoring the gap to product-grade.
  7. Cost modeling -- what replacement would cost, what the demo-to-launch delta looks like.
  8. Reporting and walkthrough -- the HTML deliverables, the call, the prioritized action plan.

What clients usually get

  • A 40-slide PDF no one reads
  • Bullet points without severity or cost
  • Jargon that hides the recommendation
  • No reproducible baseline
  • A generic "needs more tests" verdict

What we deliver

  • Standalone HTML reports, shareable
  • Every finding rated, costed, and prioritized
  • Plain-English summary, technical appendix
  • Fixed commit SHA anyone can reproduce
  • A two-quarter action plan with dollar bands
Section 05

What we found

Here is the framing we use throughout the main report, because it is true for almost every codebase we audit: intentional here, critical in production. A demo app deliberately ships with shortcuts -- they are defensible choices for the context. The audit's job is to name each one honestly so that if this system ever gets promoted from demo to product, nothing quietly carries over.

The Ad Display system earns strong marks where it can. Architecture rates an A- -- clean separation between the mock data layer and the UI, exactly the pattern you want in a demo. Type safety lands at B- -- strict TypeScript is on, but a sweep found roughly 25 any casts in application code; the first audit pass overclaimed zero. Worth fixing, but not structural. Those are the high points. Then come the structural low grades: tests are an F (there are none), error handling is a D (optimistic paths everywhere), and the overall grade lands at C+.

Key findings, with severity and effort:

  • Source maps shipped in production bundle (High, 15 min) -- Reveals full TypeScript source to anyone with DevTools. A one-line Next.js config change.
  • No CSP or security headers on GitHub Pages host (High, 15 min) -- Host-level -- intentional on GH Pages, but a real deployment must ship CSP, HSTS, X-Frame-Options day one.
  • External links missing rel="noopener" (Medium, ~30 min) -- Classic tabnabbing vector. Trivial sweep of the codebase.
  • Zero automated tests (Medium, 2-4 weeks) -- Acceptable for a demo. Non-negotiable for a platform handling customer data.
  • Generic error boundaries with swallowed stack traces (Medium, 1 week) -- Users see "something went wrong," operators learn nothing. Must resolve before paying customers land.
  • Mock data layer presented as real persistence (Info, N/A) -- Intentional and clearly isolated -- flagged only so a future engineer cannot mistake it for a database.
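The two High findings above are both configuration-level. Here is a minimal sketch of what the fixes could look like in a Next.js config file -- assuming a Next.js version that accepts a TypeScript config, and with header values as illustrative starting points rather than the audit's exact recommendations:

```typescript
// next.config.ts -- sketch only; header values are illustrative.
import type { NextConfig } from "next";

const securityHeaders = [
  // A real CSP needs tuning per app; 'self'-only is just a starting point.
  { key: "Content-Security-Policy", value: "default-src 'self'" },
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
  { key: "X-Frame-Options", value: "DENY" },
];

const nextConfig: NextConfig = {
  // High finding #1: do not ship source maps to production browsers.
  productionBrowserSourceMaps: false,

  // High finding #2: security headers on every route.
  async headers() {
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};

export default nextConfig;
```

Note that headers() only applies when Next.js itself serves the responses; on a static GitHub Pages export the headers have to come from the host or a CDN in front of it, which is exactly why the finding is marked host-level.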
The fix arc. This case study is the second pass. Between the first audit run and now, ten commits closed eight of the original findings -- React error #185, a banner z-index regression, a Stripe URL left over from the production fork, a silent useQuery failure, image paths missing the basePath, and more. We re-ran the same methodology on the updated commit, corrected our own earlier claim that application code had zero any casts (it has roughly 25), and shipped the refreshed reports below. That loop -- audit, fix, re-audit in one day -- is what the service actually buys you.

The full report counts 17 findings total: 0 Critical, 2 High, 5 Medium, 4 Low, and 6 Info. Notice the two High findings are both the 15-minute fixes -- we call those free wins, and they are the first page of the action plan.
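The rel="noopener" finding is a good example of a mechanical fix. A hypothetical sweep helper -- the function name and regex heuristic are ours, not part of the audit tooling -- might look like this:

```typescript
// Flag anchor tags that open a new tab without rel="noopener" -- the
// tabnabbing vector named in the findings. A heuristic sketch, not a parser.
const ANCHOR_RE = /<a\b[^>]*>/gi;

function findUnsafeAnchors(source: string): string[] {
  const unsafe: string[] = [];
  for (const tag of source.match(ANCHOR_RE) ?? []) {
    const opensNewTab = /target\s*=\s*["']_blank["']/i.test(tag);
    const hasNoopener = /rel\s*=\s*["'][^"']*noopener[^"']*["']/i.test(tag);
    if (opensNewTab && !hasNoopener) unsafe.push(tag);
  }
  return unsafe;
}
```

Run over each component file, the output is the list of offending tags -- which is why the effort column reads about 30 minutes rather than days.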

Section 06

The deliverables

This is the part buyers most want to see and almost never get to before signing. Two standalone HTML reports, shareable as links -- send them to a CTO, a board member, an acquirer -- and they render the same way everywhere, on any device, without a login. Start by exploring the demo app itself, then read what we found.

Platform Audit Report

Eight sections: Executive Summary, Architecture Map, Code Quality, Security Findings, Performance, Error Handling and Reliability, Cost Analysis, and the Prioritized Action Plan. Written twice: a plain-English summary at the top of each section, and a technical appendix below it. Read the full Platform Audit report.

SaaS Readiness Report

A seven-dimension scoring rubric -- multi-tenancy, user management, billing, customization, public API, onboarding, and monitoring -- plus a written gap analysis for every dimension and an estimated cost band to close each gap. This is the document that tells you whether "turn it into a SaaS" is a month of work or a year. Read the full SaaS Readiness assessment.

The headline SaaS Readiness number for this system is 10% overall, which sounds devastating until you read the breakdown: multi-tenancy 0%, user management 5%, billing 0%, customization 30%, API 20%, onboarding 15%, monitoring 0%. It is a demo -- of course billing is zero. The value of the report isn't the score, it's the gap analysis: for each dimension, here is exactly what's missing, here is a credible cost band to build it, and here is the sequence we'd recommend.
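For readers who want to check the arithmetic: the 10% headline is consistent with a simple unweighted mean of the seven dimension scores (our reading of the number -- the report itself may weight dimensions differently):

```typescript
// Seven SaaS-readiness dimension scores from the report, in percent.
const dimensionScores = {
  multiTenancy: 0,
  userManagement: 5,
  billing: 0,
  customization: 30,
  publicApi: 20,
  onboarding: 15,
  monitoring: 0,
};

const values = Object.values(dimensionScores);
const overallReadiness = values.reduce((sum, s) => sum + s, 0) / values.length;
// (0 + 5 + 0 + 30 + 20 + 15 + 0) / 7 = 70 / 7 = 10
```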

Section 07

What a client actually gets

Stripped of ceremony, every Platform Audit engagement delivers four concrete things:

  • Three standalone HTML pages -- the Platform Audit report, the SaaS Readiness report, and a landing page that indexes both.
  • A live walkthrough call -- 60 to 90 minutes, the whole report read through with you and whoever you want in the room, with time for every question.
  • A prioritized action plan -- findings ordered by ratio of impact to effort, not by severity alone. The free wins go first.
  • A replacement-cost estimate -- a defensible dollar band for rebuilding the system from scratch at market rates, plus the demo-to-launch delta.
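The ordering rule behind the action plan -- impact over effort, not severity alone -- is simple enough to sketch. The numeric scales below are our illustration; the real plan works with cost bands and dollar figures:

```typescript
interface PlanItem {
  title: string;
  impact: number;      // 1 (low) .. 5 (high), illustrative scale
  effortHours: number; // estimated hours to fix
}

// Sort by impact-to-effort ratio, descending: cheap high-impact fixes
// ("free wins") float to the top even when their severity is lower.
function prioritize(items: PlanItem[]): PlanItem[] {
  return [...items].sort(
    (a, b) => b.impact / b.effortHours - a.impact / a.effortHours
  );
}
```

Under this rule the 15-minute source-map fix outranks a multi-week test suite even though tests matter more in absolute terms -- exactly the "free wins go first" behavior described above.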

For the Ad Display system specifically, the cost side of the report came out like this. Replacement cost (rebuilding what exists today) lands at $60K-$180K, with the mid-rate estimate around $112K -- roughly 900 hours at $125/hour. The delta to take it from demo to a real multi-tenant SaaS adds another $150K-$300K. End-to-end, demo-to-launch sits at $250K-$400K.

For context, we ran the same audit on a real client production application three days earlier, and the numbers there came in at $188K-$414K. A demo on GitHub Pages isn't meant to match a production system -- but the fact that the methodology produces sane, comparable numbers on both is exactly the point.

The goal of a Platform Audit isn't to make you feel good or bad about your system. It's to hand you a defensible, reproducible number and a sequenced plan, so the next conversation with a developer or investor starts from facts instead of vibes. -- Matthew Coleman, Dovito
Section 08

For the technically curious

If you are the technical reviewer rather than the owner, here is the scaffolding underneath the report. Static analysis runs against strict TypeScript with no relaxed flags. Dependency security comes from npm audit output reconciled against the GitHub Advisory Database, then manually triaged. Code review is manual, file-by-file, against a fixed quality rubric covering cohesion, coupling, error handling, observability, and test surface. Performance categories are aligned to the Lighthouse taxonomy, though we don't rely on Lighthouse scoring alone -- we read the bundle and the render paths. SaaS readiness is scored against our own seven-dimension rubric, refined across dozens of engagements. Every finding is traceable to a file path and line number, and the entire audit is reproducible against the commit SHA.
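The traceability claim in the last sentence implies a finding record roughly like the following -- the field names and the example path are our illustration, not the audit's actual internal schema:

```typescript
type Severity = "Critical" | "High" | "Medium" | "Low" | "Info";

// Every finding carries its evidence: a file path and line number, pinned
// to the fixed commit SHA so anyone can reproduce it.
interface AuditFinding {
  title: string;
  severity: Severity;
  effort: string;     // e.g. "15 min", "2-4 weeks"
  file: string;       // path within the audited repo
  line: number;       // line number of the evidence
  commitSha: string;  // the audited commit (f6d21ca in this case study)
}

// Example instance using the source-map finding from Section 05; the file
// path and line are placeholders, not taken from the report.
const example: AuditFinding = {
  title: "Source maps shipped in production bundle",
  severity: "High",
  effort: "15 min",
  file: "next.config.js",
  line: 1,
  commitSha: "f6d21ca",
};
```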

Section 09

How to get your own audit

If you made it this far, you already know what a Platform Audit looks like -- because you just read one. The fastest way to find out whether your system is a candidate is to book an Operations Review. It is a free 45-minute conversation where we look at the shape of the project, tell you honestly whether an audit is the right next step or whether you'd be better served by something lighter, and quote you a fixed price if it is.

Next step: Book a free Operations Review at our Platform Audit service page, or reply to any email we've sent you. Bring the repository URL if you have it, or just a description of what you built and who uses it. We'll take it from there.