Read time: 7–8 min
Early-stage founders usually make one of two mistakes with metrics. They measure nothing — operating on gut feel until something breaks badly enough to notice. Or they track everything — building dashboards full of numbers that look impressive in a board slide but do not tell them whether the business is actually working.
The right approach is narrower. Find the one number that tells you whether your model is working. Move that number. Add complexity only when that number stops being sufficient.
This sounds obvious. It is harder than it sounds.
The most powerful metric is the one that reflects the real engine of your business — not the one that is easiest to measure or the one that impresses the most people in a presentation.
Before any metric makes sense, there is a prior question: do you actually have a sales engine? Peng T. Ong, Managing Partner at Monk's Hill Ventures in Singapore, describes startup growth as a two-phase process.
Phase 1 is the Sales Engine Construction Phase — figuring out the profitable, repeatable, and scalable way to acquire and retain customers.
Phase 2 is the Crank-Up Phase — deploying capital into an engine that is already working.
The failure mode that kills most startups is collapsing these phases together. Founders raise capital and start tracking growth metrics before they have proven the engine actually works. Money flows into paid acquisition. The metrics move. But the economics are broken. When the money runs out, the model is still unproven. This is not bad luck — it is a sequencing error.
Metrics matter most in Phase 2.
In Phase 1, the right questions are: can I acquire a customer profitably? Can I do it repeatedly? Can I do it in a way that scales? Once the answer to all three is yes, the metrics you build around that engine will actually tell you something useful.
A vanity metric is a number that moves in the right direction without telling you whether the business is actually working. Registered users is a vanity metric if most of them never return after signup. Total downloads is a vanity metric if few of them convert to active use. Social media followers are a vanity metric if they do not buy.
Vanity metrics are seductive because they tend to go up and to the right, which feels like progress. The discipline is to ask, for every number you track: does this number moving upward actually mean that more value is being created and captured? If the honest answer is no, it is a vanity metric. Stop tracking it as a primary metric.
Before growth metrics make sense, unit economics have to be sound. Unit economics answer a single question: for each additional unit of business we generate, are we creating or destroying value?
For a product company, the unit is often a single product sold. For a SaaS company, it is a single seat or subscription. For a marketplace, it is a single transaction. The unit economics framework asks: what does it cost to produce or deliver this unit? What revenue does it generate? What is the gross margin?
A business with healthy unit economics can scale. A business with broken unit economics will destroy value faster as it grows. This is why investors ask about unit economics before almost anything else. Scaling a business with negative unit economics — where each additional customer costs more than they generate — is the fastest way to run out of cash.
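The unit economics check above can be sketched in a few lines. This is a minimal illustration, not a prescription; the figures and the $50/$12 seat example are hypothetical.

```python
def unit_economics(revenue_per_unit: float, cost_per_unit: float) -> dict:
    """For one additional unit of business: does it create or destroy value?"""
    gross_profit = revenue_per_unit - cost_per_unit
    gross_margin = gross_profit / revenue_per_unit
    return {"gross_profit": gross_profit, "gross_margin": gross_margin}

# Hypothetical SaaS seat: $50/month in revenue, $12/month to deliver
seat = unit_economics(50.0, 12.0)

# The business can scale only if gross_profit is positive; a negative
# number means each additional unit destroys value faster as you grow.
```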
CAC is how much it costs, on average, to acquire one paying customer. Include all marketing and sales costs: paid advertising, content creation, sales team salaries, event costs, referral fees. Divide by the number of new customers acquired in the same period.
A common mistake is calculating CAC only from direct ad spend and ignoring the time cost of the sales team. If your sales cycle requires three founder-hours per deal and you value your time at anything, those hours belong in CAC.
Real example: an early-stage B2B SaaS company might spend $3,000 per month on marketing and close four new clients per month. Naive CAC: $750. But if each deal requires ten hours of founder time — and that time has opportunity cost — the real CAC is higher. Understanding that full number is what allows you to make honest decisions about which channels and processes to invest in.
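Here is the naive-versus-loaded CAC calculation from the example, as a sketch. The $100/hour figure for founder time is an assumption chosen for illustration; substitute whatever opportunity cost is honest for your situation.

```python
def cac(marketing_spend: float, founder_hours: float,
        founder_hourly_rate: float, new_customers: int) -> float:
    """Fully loaded CAC: cash spend plus the opportunity cost of
    founder time, divided by customers acquired in the same period."""
    total_cost = marketing_spend + founder_hours * founder_hourly_rate
    return total_cost / new_customers

# $3,000/month on marketing, four new clients per month
naive = cac(3000, 0, 0, 4)           # ignores founder time: 750.0

# Assume (hypothetically) founder time is worth $100/hour,
# with ten hours of founder time per deal across four deals
loaded = cac(3000, 10 * 4, 100, 4)   # 1750.0, more than double
```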
LTV is how much revenue a customer generates over the duration of their relationship with your business. For a subscription business, this is average monthly revenue times average customer lifespan. For a transaction business, it is average order value times average purchase frequency times average lifespan.
The basic rule that has stood the test of time across most business models: LTV must be at least 3x CAC for a business to be viable at scale. If acquiring a customer costs more than they will ever pay you, you are subsidising customers with investor or founder capital — not building a business.
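The LTV formula for a subscription business, and the 3x check against CAC, look like this in code. The $40/month, 18-month, $200-CAC numbers are hypothetical.

```python
def subscription_ltv(avg_monthly_revenue: float,
                     avg_lifespan_months: float) -> float:
    """LTV for a subscription business: monthly revenue times lifespan."""
    return avg_monthly_revenue * avg_lifespan_months

def ltv_to_cac(ltv: float, cac: float) -> float:
    """Viability check: this ratio should be at least 3 at scale."""
    return ltv / cac

# Hypothetical: $40/month subscription, 18-month average lifespan, $200 CAC
ltv = subscription_ltv(40, 18)   # 720.0
ratio = ltv_to_cac(ltv, 200)     # 3.6, above the 3x threshold
```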
Real example: Agoda, the travel booking platform, invested significantly in developing a composite KPI that reflected LTV from multiple customer segments across dozens of markets. The complexity was justified because the simple LTV calculation did not capture the full value of a customer who books hotels and activities across multiple trips over several years. For Agoda's business, understanding that full lifetime value — and how to grow it — was the key to profitable scaling.
Daily and Monthly Active Users measure real engagement — not registration, not one-time visits, but consistent return behaviour. The ratio between them (DAU/MAU) tells you something important about the stickiness of your product.
A DAU/MAU ratio of 0.5 means half your monthly users engage daily — strong stickiness, typical of social or communication products. A ratio of 0.05 means most users barely return after their first interaction — a signal that the product is not creating enough habit or value to drive consistent return.
For products where retention is the business model — subscriptions, platforms, community tools — this ratio is foundational. A product with high acquisition and low DAU/MAU ratio is filling a leaky bucket. You are spending to bring users in and they are leaving through the bottom.
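The stickiness ratio itself is trivial to compute; the judgment is in reading it. A sketch with the two illustrative ratios from above, using made-up user counts:

```python
def stickiness(dau: int, mau: int) -> float:
    """DAU/MAU ratio: the share of monthly users who engage on a given day."""
    return dau / mau

stickiness(5_000, 10_000)   # 0.5  -> strong habit, social-product territory
stickiness(500, 10_000)     # 0.05 -> most users barely return: leaky bucket
```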
Growth hacking, a term originally coined by Sean Ellis, refers to rapid experimentation across marketing, product, and operations to find the most efficient growth mechanisms. The 'hacking' is not about shortcuts. It is about finding the non-obvious, often channel-specific mechanisms that produce compounding growth, rather than the linear growth that comes from paid acquisition.
A/B testing is the core method. You have two versions of something — a pricing page, an email subject line, an onboarding flow, a call-to-action button. You show version A to half your audience and version B to the other half. You measure which performs better. You keep the winner and run the next experiment.
When I was at Product Madness — the mobile games studio behind Scatter Slots and Heart of Vegas — we were running at least 20 growth experiments at any given time. Not because we were scattered. Because we knew that the experiments that worked were the ones that drove everything. The ones that failed taught us something. The compound effect of getting this right was record-breaking revenue. The more we ran, the better we got at predicting which experiments would work before we ran them.
If you have a science background, growth hacking will feel familiar. It is hypothesis generation, experimental design, result measurement, and iteration — applied to business. The scientific method, running at startup speed.
Agoda's documented process of developing a single comprehensive KPI captures a principle that applies at any scale: the most powerful metric is one that captures the combined effect of the most significant drivers of your business in a single trackable number.
For a subscription business, that might be Net Revenue Retention — the change in revenue from existing customers accounting for expansion, contraction, and churn. If NRR is above 100%, you are growing from your existing customer base alone, even with no new customer acquisition. That is an exceptional position.
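The NRR definition above, as a sketch. The starting MRR, expansion, contraction, and churn figures are hypothetical.

```python
def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churned: float) -> float:
    """NRR over a period: revenue change from existing customers only,
    ignoring any revenue from newly acquired customers."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

# Hypothetical month: $100k starting MRR from existing customers,
# $15k expansion, $3k contraction, $7k lost to churn
nrr = net_revenue_retention(100_000, 15_000, 3_000, 7_000)  # 1.05, i.e. 105%

# Above 1.0 means the existing base grows on its own,
# even with zero new customer acquisition.
```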
For a platform, it might be a composite of supply quality, demand volume, and conversion rate. For a marketplace, it might be gross merchandise value per active user. The right choice depends entirely on what actually drives the business.
For a GVP-stage venture, the question is simpler: what is the one number that, if it is moving in the right direction, tells you the business is working? For most early-stage ventures, that number is weekly active customers or weekly revenue. Start there. Add complexity only when the simple number stops being sufficient.
Dropbox offered additional storage for referrals — built directly into the product. Each user became a distribution channel. The product funded its own acquisition.
Airbnb built a tool that allowed hosts to cross-post listings to Craigslist — a platform with vastly more traffic than Airbnb had at the time. They piggybacked on existing distribution infrastructure rather than building their own.
Hotmail placed a single line at the bottom of every email sent through the platform: 'Get your free email at Hotmail.' Every email was an advertisement. Every user was a salesperson. The cost was zero.
The pattern across all three: they found the specific mechanism inside the product or channel that creates compounding, not linear, growth. The product spread itself. At early stage, finding that mechanism is worth more time than almost any other question.
A Note for GVP Students
Block F asks for one key metric you will track from Day 1 and why. The 'why' is the important part.
Most teams pick DAU or revenue because those are the metrics they have heard of. The stronger answer comes from working backwards from your business model. If you are a subscription platform, the metric that matters most is probably retention or NRR. If you are a marketplace, it might be GMV per active buyer. If you are a B2B SaaS, it might be expansion revenue as a percentage of total revenue.
One more thing worth keeping in mind: metrics only tell you something useful if your sales engine is already working — if you can acquire customers profitably and repeatedly. Before that point, what you are measuring is the search, not the business. The Monk's Hill Sales Engine article (linked in your Block F resources) explains this distinction well and is worth reading before you decide which number to track.
For those of you with science backgrounds: think of every growth experiment as a hypothesis. Write it down before you run it. Define what 'working' looks like before you see the results. Measure cleanly. Record what you learned. That discipline is what separates teams that learn fast from teams that run experiments but remain confused.