Strategic Growth Hacking glossary

Alphabetical definitions of key terms used across Strategic Growth Hacking pages.

Last updated: December 12, 2025

AAARRR

Definition: A customer lifecycle model (also called “pirate metrics”): Awareness, Acquisition, Activation, Revenue, Retention, Referral. Used to find bottlenecks and choose what to improve next.

Why it matters: Without a shared lifecycle model, teams often optimize local metrics (e.g. clicks) while the real constraint sits elsewhere (e.g. activation or retention). AAARRR makes bottlenecks visible and keeps prioritization honest.

Acquisition

Definition: The stage where you get new users or customers. In AAARRR, it’s the step where people first sign up or start using the product.

Why it matters: If acquisition is the bottleneck, improving conversion rates upstream can have immediate impact. If it is not the bottleneck, more traffic can create noise rather than growth.

Acquisition Profitability

Definition: A way to understand whether your acquisition investment is economically sound (what you get back vs. what you spend).

Why it matters: It prevents “growth” from turning into performance marketing theater. If acquisition isn’t profitable (or trending toward it), scaling acquisition compounds losses.
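
As a rough sketch, acquisition profitability can be checked by comparing what a customer returns over their lifetime to what they cost to acquire. All figures below are hypothetical, for illustration only:

```python
# Hypothetical figures for illustration only.
monthly_spend = 20_000.0        # total acquisition spend for the month
new_customers = 80              # customers acquired that month
margin_per_month = 90.0         # contribution margin per customer per month
avg_lifetime_months = 18        # expected customer lifetime

cac = monthly_spend / new_customers            # cost to acquire one customer
ltv = margin_per_month * avg_lifetime_months   # value a customer returns
ltv_to_cac = ltv / cac                         # > 1 means acquisition pays back
payback_months = cac / margin_per_month        # months until CAC is recovered

print(cac, ltv, round(ltv_to_cac, 2), round(payback_months, 1))
# 250.0 1620.0 6.48 2.8
```

A ratio above 1 only says acquisition isn’t loss-making; in practice teams look for comfortable headroom and a short payback period before scaling spend.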

Activation

Definition: The moment a user experiences the core value (“aha moment”). In AAARRR, it’s the step where the user becomes meaningfully active.

Why it matters: If activation is weak, acquisition improvements often fail to translate into durable growth. Activation is one of the highest-leverage stages in many products.

Analysis

Definition: The first phase of the four-phase process: analyze the current situation and/or learnings from the previous cycle before forming hypotheses.

Why it matters: Without analysis, experimentation becomes random activity. With it, hypotheses are grounded and prioritization becomes tractable.

Blended CAC

Definition: Customer acquisition cost calculated in a blended way (not per-channel attribution), often used when attribution is incomplete.

Why it matters: It keeps decision-making grounded in economics even when channel attribution breaks down.
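
A minimal sketch of the calculation: sum spend across every channel and divide by all new customers, with no attempt at per-channel attribution (channel names and figures below are hypothetical):

```python
def blended_cac(spend_by_channel, total_new_customers):
    """Blended CAC: all acquisition spend divided by all new customers,
    regardless of which channel each customer came from."""
    return sum(spend_by_channel.values()) / total_new_customers

# Hypothetical monthly figures.
spend = {"paid_search": 12_000, "social": 5_000, "content": 3_000}
print(blended_cac(spend, 100))  # 200.0
```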

Bottleneck

Definition: The biggest constraint limiting growth right now. The growth team finds the bottleneck, solves it using the process, and then moves on to the next one.

Why it matters: Bottleneck thinking prevents scattered prioritization and turns growth work into sequential problem solving.

CAC (Customer Acquisition Cost)

Definition: The total cost to acquire a customer. For more accurate CAC, include costs like salaries, tools, outsourcing, and other acquisition-related overhead.

Why it matters: CAC determines whether growth compounds profitably or compounds losses.
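
The cost items listed in the definition can be folded into a fuller CAC calculation. A minimal sketch with hypothetical numbers:

```python
def full_cac(media_spend, salaries, tools, outsourcing, new_customers):
    # Fold acquisition-related overhead into the numerator,
    # not just the media spend.
    return (media_spend + salaries + tools + outsourcing) / new_customers

# Hypothetical monthly figures.
print(full_cac(10_000, 8_000, 500, 1_500, new_customers=50))  # 400.0
```

Media-spend-only CAC here would be 200; including overhead doubles it, which can flip a scaling decision.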

Combined Conversions

Definition: A combined view of conversions (often from your CRM) used as a reliable signal when channel-level attribution is noisy.

Why it matters: When attribution is broken, combined conversions can still be stable enough to guide weekly iteration and OKR tracking.
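
A sketch of the idea: count outcomes across all sources from CRM records instead of trusting per-channel attribution. The records and field names ("stage", "source") are hypothetical:

```python
# Hypothetical CRM export; field names are made up for illustration.
crm_records = [
    {"stage": "won",  "source": "unknown"},
    {"stage": "lead", "source": "paid"},
    {"stage": "won",  "source": "organic"},
    {"stage": "lost", "source": "unknown"},
]

# The combined count stays stable even when "source" is mostly unknown.
combined_conversions = sum(1 for r in crm_records if r["stage"] == "won")
print(combined_conversions)  # 2
```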

Conversion

Definition: A defined action that indicates progress (e.g. lead, signup, purchase). Conversions must have consistent definitions across teams to avoid confusion.

Why it matters: If “lead” means different things to marketing and sales, the organization can look like it is growing while revenue stalls.

CRM (Customer Relationship Management)

Definition: Customer relationship management system. Often the source of truth for leads and conversions (and useful for “combined conversions”).

Why it matters: When analytics attribution is unreliable, CRM-based outcomes keep growth work tied to business reality.

CTR (Click-Through Rate)

Definition: The percentage of people who click after seeing something (e.g. an ad or message). Useful for evaluating acquisition experiments.

Why it matters: CTR helps you understand whether messaging resonates, but only matters insofar as it improves qualified conversions or revenue.
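
The calculation itself is simple: clicks divided by impressions, expressed as a percentage (the numbers below are illustrative):

```python
def ctr(clicks, impressions):
    """Click-through rate as a percentage of impressions."""
    return 100.0 * clicks / impressions

print(ctr(clicks=42, impressions=1_400))  # 3.0
```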

Dashboard (Growth Hacking Dashboard)

Definition: A spreadsheet-based dashboard used as a single place to collect growth-related information and support decision-making.

Why it matters: A single source of truth reduces coordination overhead and makes the testing cycle visible across functions.

Experiment

Definition: A test designed to create learning and move a KPI. An “optimal experiment” is tied to OKRs, measurable, learning-focused, and executable within a short cycle (e.g. a week).

Why it matters: Experiments create compounding knowledge: even failed tests improve future prioritization and expose execution bottlenecks.

Four-phase process

Definition: The core operating cycle: Analysis → Hypothesis → Prioritization → Testing. Repeat each cycle consistently.

Why it matters: It prevents “random acts of growth” by enforcing a consistent loop that produces learnings and momentum.

GDPR (General Data Protection Regulation)

Definition: EU privacy regulation that impacts tracking and attribution. In practice, it often makes channel-level attribution less reliable.

Why it matters: If you depend on perfect attribution, your decision system becomes fragile. GDPR pushes teams toward more robust measurement practices.

Growth lead

Definition: A person who centralizes and drives the growth process (e.g. monitoring OKR deployment, maintaining the cadence, and keeping the team aligned).

Why it matters: Without a clear owner of the process, cycles slip, learnings vanish, and growth work becomes ad hoc.

Hypothesis

Definition: A specific, testable statement about what change could improve a KPI. In the process, you typically form 1–3 hypotheses per cycle.

Why it matters: Hypotheses prevent “busy work” and force clarity about what you expect to happen and why.

ICP (Ideal Customer Profile)

Definition: A clear description of the customer type you’re best positioned to serve. ICP is iterated continuously as you learn.

Why it matters: A clear ICP reduces wasted acquisition, improves conversion quality, and sharpens messaging and prioritization.

KPI (Key Performance Indicator)

Definition: A measurable number used to guide decisions and evaluate progress. The first step in the process is choosing a KPI that can be measured periodically.

Why it matters: Without a KPI, you cannot evaluate experiments, prioritize rationally, or compound learning.

KPI Definitions

Definition: A shared set of definitions for metrics like “lead” so different teams (e.g. marketing and sales) mean the same thing when they talk about numbers.

Why it matters: If teams use different definitions, you can hit targets without moving the business. Definitions make performance discussable and actionable.

Learning experience

Definition: A mindset where outcomes (including failures or execution issues) are treated as learning that improves future decisions and process quality.

Why it matters: Learning compounds. Treating failures as learning increases the rate of iteration and reduces fear-driven stagnation.

Mandate

Definition: The authority to make decisions needed to remove bottlenecks (e.g. buying tools or outsourcing). Lack of mandate is a common pitfall.

Why it matters: Many growth bottlenecks are organizational (skills, time, ownership). Mandate enables solutions.

North Star

Definition: A guiding top-level goal that aligns the team’s work (often revenue in cross-functional growth). In this content, it’s expressed as a “Strategic Goal” and its sub-goals.

Why it matters: A clear north star reduces wasted effort and makes trade-offs explicit.

Objective (OKR)

Definition: In OKRs, the Objective is the qualitative goal you want to achieve. It’s supported by measurable Key Results.

Why it matters: Objectives create shared direction; Key Results create measurability.

OKR (Objectives and Key Results)

Definition: A goal-setting method that connects an Objective (what you want) to Key Results (how you measure progress). Used to steer experiments toward meaningful outcomes.

Why it matters: OKRs prevent growth work from drifting into unconnected tactics. They also make progress (or lack of it) visible.

Owner

Definition: The accountable person for a metric, area, task, or experiment. Clear ownership prevents work from stalling.

Why it matters: Without owners, tasks and experiments become “everyone’s job”, which usually means no one does them.

Pipeline Tracking

Definition: A shared view of the sales pipeline used to analyze pipeline efficiency and connect marketing and sales performance.

Why it matters: It prevents “lead volume” from hiding poor downstream conversion or process bottlenecks.

Prioritization

Definition: The phase where you select what to test next based on likelihood of success, resource intensity (time/money), and scalability.

Why it matters: Great ideas are abundant; time and attention are scarce. Prioritization protects the cadence and learning rate.
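
One way to operationalize the three criteria is a simple score and sort. The weighting below is just one illustrative scheme (ICE-style), and the ideas and ratings are hypothetical:

```python
# Hypothetical ideas scored 1-10 on the criteria from the definition.
ideas = [
    {"name": "rewrite signup flow", "likelihood": 7, "effort": 8, "scalability": 9},
    {"name": "new ad creative",     "likelihood": 6, "effort": 2, "scalability": 4},
    {"name": "referral prompt",     "likelihood": 5, "effort": 3, "scalability": 8},
]

def score(idea):
    # Higher likelihood and scalability help; higher resource cost hurts.
    return idea["likelihood"] * idea["scalability"] / idea["effort"]

for idea in sorted(ideas, key=score, reverse=True):
    print(f"{idea['name']}: {score(idea):.1f}")
```

Note how a "big" idea (the signup rewrite) can rank last once resource intensity is priced in, which is exactly what protects a weekly cadence.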

Referral

Definition: The stage where existing users bring in new users (word-of-mouth, invites, sharing). In AAARRR, it’s the last step.

Why it matters: Referral can create compounding, lower-CAC growth—but usually only after earlier stages are working.

Retention

Definition: How well users/customers keep coming back. Strong retention compounds growth and is often a better lever than more acquisition.

Why it matters: Poor retention turns acquisition into a leaky bucket. Great retention makes almost every other growth effort more effective.
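
A common way to measure this is a cohort retention curve: the share of a starting cohort still active in each later period. The cohort below is hypothetical:

```python
def retention_curve(active_by_period):
    """Share of the starting cohort still active in each later period."""
    start = active_by_period[0]
    return [active / start for active in active_by_period]

# Hypothetical cohort: 1,000 signups, then monthly active counts.
print(retention_curve([1000, 620, 480, 410]))  # [1.0, 0.62, 0.48, 0.41]
```

A curve that flattens (as above, around 40%) suggests a retained core; a curve that keeps sliding toward zero is the leaky bucket.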

Revenue (AAARRR)

Definition: The monetization or conversion event in the lifecycle. In AAARRR, it’s the point where value is captured (purchase, subscription, etc.).

Why it matters: Revenue is the most direct growth outcome. But improving it often requires upstream work (activation, retention, pipeline efficiency).

Strategic Goal

Definition: A top-level goal that serves as the “north star” for experiments and tasks. Sub-goals break it into actionable areas.

Why it matters: It aligns cross-functional work and ensures experiments are not isolated tactics.

Systematic

Definition: Operations turned into repeatable routines that can be executed consistently and improved over time.

Why it matters: Systematic work compounds: each cycle improves execution quality and learning speed.

Testing

Definition: Running tests whose outcomes aren’t known upfront. Testing is executed as part of a repeatable cycle and documented thoroughly.

Why it matters: Testing turns uncertainty into learning, which is the core engine of improvement.

Testing cycle

Definition: A recurring cadence (often weekly or biweekly) where you run the four-phase process end-to-end and document learnings.

Why it matters: Without a cadence, growth work becomes reactive and learnings don’t accumulate.

Vanity metrics

Definition: Metrics that look good but don’t reflect real business progress (e.g. impressions without conversions). Chasing them is a common red flag.

Why it matters: Vanity metrics can create false confidence and lead to scaling the wrong things.