Hiring Your First Media Buyer: Interview Questions, Comp Benchmarks & 90-Day Ramp Plans


A recruitment guide for lead generation operators making their first media buying hire. Covers sourcing, interview methodology, portfolio evaluation, compensation design, and structured onboarding—the decisions that determine whether the hire accelerates growth or stalls it.


The First Hire Problem

Lead generation operators who have proven a model often face the same wall: campaigns are profitable, spend is growing, but there are not enough hours to manage everything properly. The solution seems obvious—hire a media buyer. The execution is where most operators get it wrong.

The first media buying hire is categorically different from subsequent hires. There is no internal expert to benchmark candidates against. No institutional playbook that makes a mediocre hire survivable. No peer group for the new hire to learn from. If the first media buyer underperforms, the entire operation underperforms with them.

The stakes are specific. A $50,000 monthly spend operation with a buyer optimizing at 80% efficiency wastes roughly $10,000 per month, 20% of spend, relative to full optimization. That is $120,000 annually—before accounting for quality degradation and buyer relationship strain that compounds over months.

This guide focuses narrowly on the first hire: where to find candidates, how to evaluate them before extending an offer, how to structure compensation that drives performance without incentivizing gaming, and how to build a 90-day ramp plan that accelerates time-to-productivity rather than repeating the expensive improvised onboarding that most operators default to.


When the First Hire Is Justified

Before sourcing candidates, verify the economics support the hire. Premature hiring creates expensive overhead before campaigns have the volume to justify dedicated management.

The Spend Threshold

The math typically works at $25,000–$35,000 in monthly ad spend with validated unit economics. At $25,000 monthly spend with 3x ROAS, a dedicated buyer who improves efficiency by 15% generates $11,250 in incremental monthly revenue—$135,000 annually. A mid-level buyer at $80,000 base pays for themselves within eight months even at conservative improvement assumptions.

Below $20,000 monthly, a full-time hire rarely pencils out. At $15,000 monthly spend, a $70,000 salary represents nearly 40% of total spend. The math requires exceptional improvement rates that first hires rarely deliver during ramp periods. Use contractors or agencies until you cross the threshold.
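
The threshold arithmetic above can be sketched as a quick payback check. The 15% improvement rate and $80,000 salary are the illustrative figures from this section, not fixed benchmarks:

```python
def payback_months(monthly_spend, roas, efficiency_gain, annual_cost):
    """Months for a buyer's incremental revenue to cover their annual cost."""
    incremental = monthly_spend * roas * efficiency_gain  # added monthly revenue
    return annual_cost / incremental

# The section's example: $25,000/month spend at 3x ROAS, a 15% efficiency
# improvement, against an $80,000 base salary (illustrative figures)
gain = 25_000 * 3 * 0.15                          # $11,250 incremental monthly revenue
months = payback_months(25_000, 3, 0.15, 80_000)  # just over 7 months
```

At the stated 15% improvement the payback lands just over seven months, consistent with the "within eight months" claim even before adding conservatism.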

The Attention Deficit Signal

Spend level alone is not sufficient. The clearer trigger is capacity constraint: optimization frequency dropping from daily to weekly, testing pipelines stalling, profitable campaigns plateauing at current spend because scaling requires monitoring you cannot provide. When campaigns are leaving money on the table because you cannot watch them closely enough, the hire is justified regardless of whether spend has hit a specific threshold.

The Alternative Use Test

The final check: what else could you be doing with the time you spend on campaign management? If the answer is developing buyer relationships, building new verticals, or working on operational infrastructure, those activities likely generate more value than campaign optimization. The hire makes sense when your alternative uses of time are worth more than a buyer’s fully-loaded cost.


Where to Find Candidates

The sourcing strategy determines the quality ceiling of the candidates who ever reach the interview stage. Posting on generic job boards and waiting produces mediocre candidate pools; active sourcing from the channels below produces meaningfully stronger ones.

Platform-Specific Job Boards

LinkedIn Jobs works best for mid-level and senior candidates. Filter by job titles including “paid media manager,” “performance marketing manager,” “media buyer,” and “growth marketer.” Set alerts for candidates who recently left agency roles—agency alumni seeking client-side positions often represent a strong value segment: trained to move fast, experienced with multiple accounts, and motivated by the ownership that client-side roles provide.

WeWorkRemotely and Remote.co attract candidates who prefer remote-first positions. These candidates often accept below-metro compensation in exchange for location flexibility, which expands access at competitive cost. Wellfound (formerly AngelList) reaches startup-oriented candidates accustomed to ambiguity and broader responsibilities than agency roles require.

Indeed and ZipRecruiter produce high application volume but lower signal-to-noise ratios. Use them for junior roles where volume matters more than precision, or when speed is critical and you can invest time in screening.

Performance Marketing Communities

The highest-quality sourcing often happens in communities where practitioners discuss tactics. Facebook groups focused on paid media—Foxwell Founders, Ad Buyers Community, Meta Ads Mastermind—contain working buyers who are identifiable through their posts and questions. A direct message to an active, thoughtful contributor in these communities often reaches a candidate who would never respond to a job posting.

Performance marketing Slack communities and Discord servers serve a similar function. Communities built around specific platforms (Google Ads focused groups) or specific verticals (insurance marketing, mortgage lead gen) provide pre-filtered candidate pools already familiar with your world.

LeadsCon and Affiliate Summit attendee networks contain operators who understand lead generation economics specifically—the delayed feedback loops, buyer return rates, and quality/CPL trade-offs that confuse candidates from pure e-commerce backgrounds. Connections from these events are worth nurturing even when you are not actively hiring.

Agency Alumni Networks

Media buying agencies are training programs that eventually release candidates into the market. Junior buyers who spend two to three years at an agency learning account management, creative testing, and platform mechanics often reach a ceiling where client-side roles offer more growth, ownership, and compensation.

These candidates carry advantages: they have handled multiple client accounts simultaneously (which develops pattern recognition), they know how to move quickly under pressure, and they understand that optimization decisions have real business consequences. The trade-off is agency-specific behaviors to unlearn—prioritizing deliverables over outcomes, waiting for direction rather than self-initiating, treating reporting as an end rather than a means.

Agency alumni sourcing works best through direct contact with agencies in adjacent verticals. A media buyer at a performance marketing agency specializing in financial services has transferable knowledge to insurance or mortgage lead generation. Reach out directly to agency owners asking whether any strong buyers are looking for client-side opportunities.

Vertical-Adjacent Companies

Candidates who already work in lead generation—whether at competitors, adjacent verticals, or complementary businesses—require the shortest ramp period. They understand buyer economics, return rate dynamics, and the compliance constraints that shape what campaigns can say and target.

Direct outreach to media buyers at companies in your space is the most targeted approach. Identify candidates through LinkedIn by searching for media buying roles at specific companies, then reach out with context about your operation and opportunity. This requires delicacy with direct competitors, particularly if you share buyer relationships. It is more viable with companies in adjacent verticals where your operations do not directly overlap.


The Interview Framework

The goal of the interview process is to distinguish candidates who understand media buying conceptually from those who can execute it in your specific environment. This requires going beyond credential review and platform name-dropping into structured evaluation of how candidates actually think and work.

Stage 1: Phone Screen (20 minutes)

The phone screen filters for baseline qualifications and communication ability. Candidates who cannot explain their work clearly to a non-expert will struggle to communicate campaign decisions to stakeholders.

Core questions for the phone screen:

What platforms have you managed spend on, and what was the largest monthly budget you personally controlled—not your team, but you specifically?

This distinguishes candidates who managed at scale from those who were adjacent to scale. A candidate who “managed $5 million monthly” in a team of eight buyers may have personally controlled $100,000. The difference matters for your operation.

Walk me through a campaign that significantly underperformed and what you did about it.

The answer reveals analytical capability and learning orientation. Strong candidates have a specific campaign in mind, can explain the signals they noticed and when, describe the diagnostic process, and articulate what they learned. Weak candidates generalize, blame external factors without self-reflection, or describe campaigns that underperformed slightly rather than significantly.

What questions do you have about our operation before we go further?

Candidates who ask no questions or ask generic questions (“What does the company culture look like?”) have not engaged seriously with the opportunity. Strong candidates ask about verticals, buyer relationships, compliance environment, return rate expectations, or current platform mix. Their questions reveal whether they understand lead generation specifically.

Stage 2: Technical Assessment (60-90 minutes)

The technical assessment evaluates platform knowledge, analytical thinking, and lead generation economics. Use a structured case study rather than open-ended questions—it creates comparable evaluation across candidates.

Case study format: Present a sanitized dataset from your operation (or a realistic fictional equivalent) showing campaign performance over 90 days across multiple ad sets. Include CPL trends, conversion rates, creative rotation data, and downstream lead quality indicators (return rates, contact rates). Ask candidates to prepare a written analysis and recommendations before a 30-minute discussion.

What to evaluate in the analysis:

  • Do they identify the right problems, or do they fixate on surface-level CPL without connecting to quality?
  • Do their recommendations have logical prioritization, or are they a laundry list of everything they could theoretically test?
  • Do they quantify expected impact, or do they make vague directional suggestions?
  • Do they acknowledge uncertainty where it exists, or do they present speculation as fact?

Platform diagnostic questions:

These questions test depth beyond surface familiarity.

“CPL on your best Facebook campaign doubled over three weeks. Walk me through your diagnostic process step by step.”

A strong answer works through a systematic checklist: creative fatigue (checking frequency, CTR decay by creative), audience saturation (reach curves, overlap with prior audiences), competitive pressure (auction data, impression share changes), landing page issues (conversion rate changes independent of traffic quality), tracking discrepancies (pixel firing verification, Conversions API data comparison), and external factors (seasonal demand shifts, economic conditions affecting the vertical). The order and completeness of the checklist reveals how organized their thinking is under pressure.

“You take over a Google search campaign that has been running for six months. The previous buyer made no changes in the last 60 days and performance has been flat. What do you look at first?”

Strong answers: search term reports for wasted spend and negative keyword gaps, match type analysis, Quality Score by keyword, audience layering opportunities, conversion tracking verification, competitor analysis through auction insights. This question evaluates initiative and structured auditing thinking.

“A large budget change triggers a Google campaign back into the learning phase. How do you manage the next two weeks?”

This tests knowledge of how platform machine learning reacts to disruption. Strong answers include: temporary bid adjustments to protect efficiency during learning, creative freshness to give the algorithm useful signals, reduced optimization targets during relearning, and expectations management for temporary performance degradation.

Lead generation economics questions:

“A campaign delivers leads at $28 CPL with a 3% return rate. A second campaign delivers leads at $22 CPL with a 12% return rate. Which is performing better?”

The correct analysis: at $28 CPL with 3% returns, effective cost per kept lead is $28.87. At $22 CPL with 12% returns, effective cost per kept lead is $25. The cheaper CPL campaign is actually more expensive per usable lead. Strong candidates answer this immediately. Candidates who have not thought in terms of true cost per lead struggle.
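
The return-rate adjustment behind that analysis is a one-line formula; a minimal sketch:

```python
def effective_cpl(cpl, return_rate):
    """Cost per kept lead once buyer returns shrink the usable denominator."""
    return cpl / (1 - return_rate)

camp_a = effective_cpl(28, 0.03)  # $28 CPL, 3% returns  -> ~$28.87 per kept lead
camp_b = effective_cpl(22, 0.12)  # $22 CPL, 12% returns -> $25.00 per kept lead
```

The same function is what the analytics question bank later asks candidates to articulate: the "cheaper" campaign is the more expensive one per usable lead.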

“Your buyer returns are rising from 8% to 14% over six weeks, but platform CPL looks stable. What do you investigate?”

This tests whether candidates understand that platform metrics and downstream quality can diverge. Strong answers: creative messaging misalignment with audience intent, traffic quality shifts from audience expansion, landing page changes altering pre-qualification, geographic or demographic drift in who converts, time-of-day or device patterns shifting.

Stage 3: Portfolio Evaluation (30 minutes)

Ask candidates to bring three examples of campaigns they personally managed. For each, they should be able to show the account structure, the optimization decisions they made over the campaign lifecycle, and the performance trajectory before and after their interventions.

What to look for:

Evidence of testing methodology. Does the campaign history show systematic hypothesis testing, or random changes without clear rationale? Organized buyers document what they tested and why. Disorganized buyers make changes reactively and cannot explain the logic.

Handling of performance deterioration. Every campaign eventually declines. How candidates managed deterioration—whether they noticed early, diagnosed accurately, and responded effectively—reveals more than campaigns that ran well without intervention.

Scale decisions. When did they decide to increase budget, and what signals justified it? Premature scaling is one of the most expensive mistakes in media buying. Candidates who scaled based on early-stage performance without sufficient data reveal poor judgment about statistical confidence.

Attribution sophistication. Do they understand the difference between platform-reported conversions and actual leads that convert to revenue? Candidates who optimize solely for platform-visible metrics without connecting to downstream outcomes will struggle in lead generation’s delayed-feedback environment.

The 40-Question Interview Bank

Use these questions selectively across the interview stages. Not every question belongs in every interview—choose based on role level and what prior stages have revealed.

Platform mechanics (choose 8-10):

  1. How does Facebook’s Advantage+ targeting differ from manual audience selection, and when do you use each?
  2. What signals indicate that a campaign is leaving the learning phase successfully versus stalling?
  3. How do you structure a Google campaign for a new vertical where you have no historical performance data?
  4. Explain how you would implement Conversions API and why it matters for lead generation tracking.
  5. When does broad match outperform exact match in Google search, and how do you validate this for a specific campaign?
  6. How do you approach Performance Max campaign structure when you want visibility into where spend is going?
  7. Describe your process for identifying and excluding bot traffic before return data confirms the problem.
  8. How do you set up a proper A/B test at the ad set level—what controls do you need and what variables can you test simultaneously?
  9. What does iOS 14.5 tracking loss mean practically for Facebook campaign optimization, and how do you compensate?
  10. How do you diagnose whether a CPL increase is driven by creative fatigue versus audience saturation versus platform-level competition?
  11. Walk me through how you structure naming conventions for campaigns, ad sets, and ads—and why.
  12. How do you think about budget allocation across campaigns when you have limited daily spend to work with?
  13. What does a healthy Facebook frequency look like for a lead generation campaign, and at what point does it become a problem?
  14. How do you evaluate whether a new audience test has received enough budget and time to make a valid assessment?
  15. Explain how you would approach dayparting for a lead generation campaign—what data informs the decision?

Analytics and attribution (choose 5-7):

  1. How do you reconcile the difference between platform-reported conversions and leads that appear in the CRM?
  2. Walk me through how you would build a weekly reporting template for a media buying operation—what metrics would it include?
  3. How do you calculate the effective cost per kept lead when accounting for return rates?
  4. What does “statistical significance” mean in the context of creative testing, and how do you apply it practically?
  5. How do you attribute performance to specific campaigns when leads move through a multi-touch path before converting?
  6. You notice that one traffic source has a 4% return rate and another has a 16% return rate, but both have similar CPLs. How do you investigate and respond?
  7. Describe a dashboard you built or improved significantly—what metrics did it include and what decisions did it drive?

Creative and testing (choose 4-6):

  1. How do you decide what to test next when you have multiple hypotheses competing for budget?
  2. What makes a lead generation creative effective—what elements do you prioritize and why?
  3. Describe your process for testing a new creative angle from hypothesis to conclusion.
  4. How do you evaluate creative performance for lead generation where the conversion event happens off-platform?
  5. How quickly do you typically expect a winning creative to fatigue, and what signals tell you it is time to refresh?
  6. What is your process for communicating creative test results to a design team so they can iterate effectively?

Lead generation specifics (choose 5-7):

  1. How is media buying for lead generation different from media buying for e-commerce?
  2. Explain the relationship between CPL and downstream lead quality—why can a lower CPL campaign perform worse on a true cost basis?
  3. What compliance considerations affect creative and landing page content in [your vertical]?
  4. How do you think about optimization timing when your primary quality signal (return rate) arrives 7-14 days after the lead?
  5. What is a ping-post system and how does it affect how you think about traffic quality?
  6. How would you approach quality optimization if your buyer reduced their order volume without explanation?
  7. What does a 15% return rate tell you about a campaign, and what would you do about it?

Judgment and self-management (choose 4-5):

  1. Describe a time you identified a problem in a campaign before you were asked to look at it. What tipped you off?
  2. Tell me about a media buying mistake you made—not a small one, a real one with real cost. What happened and what did you change?
  3. How do you prioritize when you have three campaigns underperforming simultaneously and limited time to address all of them?
  4. What does your daily campaign management routine look like? Walk me through a typical morning.
  5. How do you stay current on platform changes, and can you give me an example of a recent change that affected how you work?

Compensation Benchmarks by Market

Compensation ranges for media buyers vary significantly by geography, experience level, and spend under management. The following benchmarks reflect 2024-2026 market conditions.

Base Salary by Geography and Experience

| Market | Junior (0-2 yrs) | Mid-Level (2-5 yrs) | Senior (5+ yrs) |
| --- | --- | --- | --- |
| Major metro (NYC, SF, LA, Boston) | $65,000–$80,000 | $90,000–$115,000 | $120,000–$160,000 |
| Secondary markets (Chicago, Austin, Denver, Atlanta) | $55,000–$70,000 | $75,000–$100,000 | $100,000–$135,000 |
| Remote-first (national candidate pool) | $58,000–$75,000 | $78,000–$105,000 | $105,000–$145,000 |
| Smaller markets / tier-3 cities | $48,000–$62,000 | $62,000–$85,000 | $85,000–$115,000 |

Remote hiring has compressed geographic differentials somewhat. Candidates in secondary and smaller markets increasingly know their market rate and expect compensation closer to national averages when roles are fully remote. Expect negotiation pressure toward the upper end of remote ranges for candidates with strong portfolios.

Adjustment Factors

Spend under management adjusts expectations upward. A buyer managing $500,000 monthly expects more than one managing $50,000, even at the same experience level:

| Monthly Spend Managed | Typical Range Adjustment |
| --- | --- |
| Under $50,000 | Benchmark or 5-10% below |
| $50,000–$150,000 | At benchmark |
| $150,000–$500,000 | 10-20% above benchmark |
| $500,000+ | 20-35% above benchmark |

Vertical complexity adds premium for specialized knowledge. Insurance, mortgage, and legal lead generation involve compliance constraints, delayed feedback loops, and buyer economics that differ substantially from e-commerce or direct-to-consumer. Candidates with proven experience in regulated verticals command 10-15% premiums over generalists with equivalent platform experience.

Lead generation experience specifically is scarce relative to general performance marketing experience. A buyer who has worked in lead generation—understands ping-post systems, return rate dynamics, buyer relationship management—typically commands 15-20% more than an equivalent candidate from an e-commerce background.
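
Taken together, these adjustments can be stacked onto a base benchmark. The multiplicative stacking below is one reasonable convention, an assumption rather than something the benchmarks prescribe:

```python
def adjusted_salary(benchmark, spend_adj=0.0, vertical_premium=0.0, leadgen_premium=0.0):
    """Stack spend-tier, vertical, and lead-gen premiums onto a base benchmark.

    Multiplicative stacking is an assumed convention; the benchmarks do not
    specify how the adjustments combine.
    """
    return benchmark * (1 + spend_adj) * (1 + vertical_premium) * (1 + leadgen_premium)

# Hypothetical: $90,000 mid-level remote benchmark, $200k/month spend tier (+15%),
# regulated vertical (+10%), lead generation background (+15%)
offer = adjusted_salary(90_000, spend_adj=0.15, vertical_premium=0.10, leadgen_premium=0.15)
```

Stacked this way, the premiums move a $90,000 benchmark to roughly $131,000, which illustrates why vertical-experienced lead-gen buyers price well above generalist ranges.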

Total Compensation Structure

For first hires, a base-plus-performance structure works better than base-heavy packages for two reasons: it preserves cash during ramp, and it creates shared-outcome alignment once the buyer is fully operational.

A typical first-hire structure:

  • Base salary: 80-85% of target total compensation
  • Performance bonus: 15-20% of target total compensation, paid quarterly
  • Equity / long-term incentive: Optional but increasingly expected at growth-stage operations

For a mid-level buyer at $90,000 target total compensation: $75,000 base plus a $15,000 annual performance bonus, paid as $3,750 per quarter against annual targets. The quarterly payout cycle reinforces the connection between decisions and outcomes without the disconnect of annual bonuses.
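
The split for the $90,000 example works out as follows (a sketch of the 80-85% base convention; the exact split is a judgment call):

```python
target_total = 90_000             # target total compensation
base = 75_000                     # base salary
bonus = target_total - base       # annual performance bonus: $15,000
quarterly_payout = bonus / 4      # $3,750 per quarter
base_share = base / target_total  # ~83%, inside the 80-85% band
```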

When to Pay Above Market

Underpaying the first media buying hire is a false economy. An underpaid buyer at 90% engagement produces worse results than a market-rate buyer at full engagement, and the cost of a failed first hire—replacement recruiting, ramp period, and months of suboptimal campaign performance—far exceeds the salary premium of hiring correctly the first time.

Pay above market when: the candidate has direct experience in your specific vertical, they can demonstrably point to significant CPL improvements they drove personally, or they are leaving a competing operation with institutional knowledge of your buyer landscape. The premium for these attributes is usually $10,000-$20,000 annually. The return on that premium is a substantially shorter ramp and lower risk.


Portfolio Evaluation Criteria

Most media buyers present portfolios that tell the story they want you to hear. The evaluation framework below identifies what to look for beyond the narrative.

What to Request Before the Interview

Ask candidates to bring to the portfolio evaluation:

  1. Three campaigns they personally managed—not campaigns their team managed, not campaigns they contributed to, but campaigns where they made the optimization decisions
  2. For each campaign: account structure screenshot, 90-day performance chart, list of major changes made with approximate dates
  3. One example of a campaign they would do differently with hindsight—the failure they learned the most from

Candidates who cannot provide this material either lack the hands-on experience they claim or have not worked in environments that documented performance. Both are red flags.

Evaluating the Portfolio

Account structure tells you about systematic thinking. Well-organized accounts have clear campaign segmentation by objective, consistent naming conventions, logical ad set structures that enable clean testing, and obvious separation between prospecting and retargeting. Chaotic account structure reflects chaotic thinking—and chaotic thinking produces unpredictable campaign performance.

The optimization timeline reveals decision-making quality. Look at what changes were made and when. Strong buyers intervene at the right moment—not too early (before data is meaningful) and not too late (after losses have accumulated). Early intervention on bad signals with minimal data suggests impatience. Late intervention on clear underperformance suggests inattention or lack of confidence in making calls.

Test volume indicates work ethic and methodology. A buyer managing $50,000 monthly with no creative tests in 60 days is coasting. The absence of testing is the most common symptom of buyers who have settled into maintenance mode rather than active optimization. A healthy operation at that spend level runs five to ten meaningful tests monthly.

Improvement trajectories matter more than absolute performance. A campaign that started at $45 CPL and reached $28 CPL over six months through systematic optimization is more impressive than a campaign that maintained $25 CPL with minimal intervention. The former demonstrates active optimization capability; the latter demonstrates good luck or a good starting position.

The failure example is the most revealing. How candidates describe campaigns that went wrong—the specificity, the ownership, the learning—distinguishes operators who treat experience as a teacher from those who treat it as something to hide. Candidates who cannot identify a meaningful failure either have not done enough work to fail significantly or are unwilling to be honest.


The 90-Day Ramp Plan

A structured ramp plan accomplishes two things: it accelerates time-to-productivity by removing ambiguity about expectations and resources, and it creates documented milestones that enable early detection of problems before they become expensive.

Week 1-2: Environment Orientation

Access and systems setup: Platform access with appropriate permission levels, analytics dashboard access, CRM and lead distribution platform orientation, communication tool access, and documented access to historical campaign data.

Historical performance review: Walk through the last 12 months of campaign performance—what worked, what did not, seasonal patterns, buyer feedback that affected optimization decisions. This context prevents the new hire from reinventing analyses that have already been done and repeating mistakes that have already been made.

Stakeholder introductions: Who the buyer communicates with, how often, and through what channels. Internal stakeholders who receive performance reports, buyers who provide quality feedback, vendors who manage platform accounts. The communication infrastructure matters as much as the technical setup.

Deliverable: A written summary of the existing campaign landscape, including the buyer’s initial observations about optimization opportunities. This tests comprehension and reveals analytical instincts before any decisions are made.

Week 3-4: Supervised Execution

Shadow and recommendation mode: The buyer analyzes campaigns and makes optimization recommendations before implementation. Each recommendation is reviewed and either approved or challenged with explanation. This phase catches judgment gaps before they produce losses.

First independent decisions: Low-stakes optimizations executed independently—creative rotation, minor bid adjustments, audience exclusions. Document the rationale for each change in a shared log. Review the log weekly to identify reasoning patterns that need correction.

Error budget: Define explicitly what errors are acceptable during this phase. A $500 mistake from a wrong bid is different from a $5,000 mistake from a structural campaign error. Defining the error budget in advance reduces anxiety for the new hire and sets reasonable expectations.

Deliverable: A 30-day performance analysis comparing the first month of their management against baseline, including their interpretation of what drove changes and what they would do differently.

Month 2: Managed Ownership

Campaign ownership: The buyer takes full ownership of a defined subset of campaigns or one complete channel. Budget changes above a defined threshold (e.g., $1,000 daily) require approval. Testing requires documented hypotheses. Reporting is weekly rather than ad hoc.

First testing cycle: Execute at least two creative tests with documented hypotheses and conclusions. The outcome of the tests matters less than the quality of the process—whether the hypothesis was specific, the test was designed properly, and the conclusion was drawn from sufficient data.

Buyer relationship introduction: If your operation involves direct buyer feedback, introduce the new buyer to that feedback loop. Understanding how downstream quality data flows back into optimization decisions is a skill that takes time to develop. Earlier exposure accelerates the learning.

Deliverable: A 60-day analysis with comparative performance, a log of all tests conducted with results and conclusions, and a written recommendation for the next 30 days.

Month 3: Full Accountability

Full campaign ownership: Complete responsibility for assigned campaigns or channels. Approval thresholds increase. Decisions are documented but not reviewed before execution.

Performance against KPIs: By month three, performance against defined KPIs should be measurable. CPL relative to baseline, lead volume against target, return rate trajectory, and testing velocity are the primary metrics. Secondary metrics—optimization frequency, reporting quality, response time on performance anomalies—assess work quality.

Escalation judgment: Month three tests whether the buyer has developed appropriate escalation instincts—knowing when to handle problems independently versus when to surface them immediately. Early in a ramp, buyers often escalate too much or too little. By month three, calibration should be visible.

Deliverable: A 90-day performance review delivered by the buyer—their own assessment of where they performed, where they underperformed, and what they plan differently in the next 90 days.

KPIs for Assessing First-Hire Performance

By the end of month three, use these metrics to assess whether the hire is on track:

| Metric | On Track | Needs Attention | At Risk |
| --- | --- | --- | --- |
| CPL vs. baseline | Within 10% or better | 10-20% worse | 20%+ worse without clear explanation |
| Testing velocity | 4+ tests/month | 2-3 tests/month | 0-1 tests/month |
| Return rate trend | Stable or improving | 2-3% worse | 5%+ worse |
| Lead volume vs. target | 90-100% of target | 75-90% | Under 75% |
| Response time on anomalies | Same day | Next day | 2+ days |
| Documentation quality | Consistent and clear | Inconsistent | Absent |

No single metric determines the assessment. A buyer who is 15% above CPL target but has caught a quality issue, documented the root cause, and is actively testing a solution demonstrates different capability than a buyer who is 15% above CPL target with no explanation and no testing activity.
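
The CPL row of the table can be expressed as a simple classifier. Thresholds come from the table; treating "worse" as a percentage increase over baseline is my reading of it:

```python
def cpl_status(baseline_cpl, actual_cpl):
    """Map CPL drift against baseline into the table's three bands."""
    drift = (actual_cpl - baseline_cpl) / baseline_cpl
    if drift <= 0.10:
        return "on track"
    if drift <= 0.20:
        return "needs attention"
    return "at risk"
```

As the paragraph above notes, a band is a starting point for the conversation, not the verdict: the context behind the drift matters as much as the number.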


When to Hire vs. Outsource

The hire-vs.-outsource question does not have a universal answer. The right structure depends on spend level, internal capability, and operational stage.

Agency Makes Sense When

Spend is below $25,000 monthly. The math rarely supports a full-time hire. An agency or contractor provides fractional expertise while you build toward threshold. Expect to pay $3,000–$8,000 monthly for agency management at this spend level—manageable overhead that does not create permanent employment commitments.

You need to test a new channel. Testing connected TV, programmatic display, or a new social platform without internal expertise is faster and cheaper through an agency that already has the infrastructure. If the channel proves valuable, build in-house. If it does not, you have not created permanent overhead.

You need overflow capacity. Your internal buyer is fully utilized and growth opportunities exceed capacity. Agencies provide elastic capacity that scales with demand without requiring permanent headcount.

In-House Makes Sense When

Spend consistently exceeds $40,000 monthly with positive unit economics. At this level, the economics of a dedicated buyer are unambiguous.

Platform operations are core to your model, not peripheral. If media buying is the primary source of leads and the primary driver of business performance, the institutional knowledge risk of agency dependence is too high. When the agency relationship ends, the knowledge ends with it.

Quality feedback loops require tight integration. Lead generation’s unique dynamic—where buyer return rates and contact rates provide critical optimization signals—works best when the buyer has direct access to that downstream data. Agency models introduce communication layers that slow the feedback loop.

The Hybrid Structure

Most operations above $75,000 monthly spend use some version of this model: one or two core channels managed in-house where institutional knowledge accumulates, specialized channels or overflow managed by agencies. The core channels are where your best data lives and your best optimization opportunities concentrate. The peripheral channels are where outside expertise earns its keep.
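The cost side of the hire-vs.-outsource decision can be sketched as a quick break-even comparison. The fee percentage, salary, and overhead load below are illustrative assumptions, not figures from this guide—and note that a fee-only comparison understates the in-house case, which also rests on knowledge retention and tighter quality feedback loops:

```python
# Back-of-envelope monthly cost comparison: agency management vs. a
# full-time buyer. All inputs are assumptions for illustration.

def monthly_agency_cost(spend: float, fee_pct: float = 0.15,
                        minimum: float = 3_000.0) -> float:
    """Agencies commonly charge a percentage of spend with a monthly minimum."""
    return max(spend * fee_pct, minimum)

def monthly_in_house_cost(salary: float = 90_000.0, load_factor: float = 1.3,
                          tools: float = 350.0) -> float:
    """Fully loaded cost: salary plus benefits/overhead plus tooling budget."""
    return salary * load_factor / 12 + tools

for spend in (20_000, 40_000, 75_000):
    agency = monthly_agency_cost(spend)
    in_house = monthly_in_house_cost()
    print(f"${spend:,}/mo spend -> agency ${agency:,.0f} vs. in-house ${in_house:,.0f}")
```

On fees alone, the crossover under these assumptions lands above the $40,000 spend threshold named earlier—the gap is the value the guide attributes to in-house institutional knowledge and faster optimization loops.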


Frequently Asked Questions

What is the biggest mistake in a first media buying hire?

Hiring based on platform name recognition rather than demonstrated analytical capability. Candidates who can recite platform features are not the same as candidates who can diagnose performance problems and execute improvements. The interview process should test thinking under realistic scenarios, not credentials.

How do you evaluate a candidate when you lack platform expertise yourself?

Ask candidates to explain their methodology in plain language. Strong buyers can articulate their process clearly to a non-expert—what signals they look for, why they make specific decisions, what they would do differently with hindsight. If a candidate cannot explain their work clearly, they cannot think through novel situations systematically. Also evaluate reasoning quality in the case study phase; you do not need to know the right answer to assess whether the reasoning is structured and coherent.

Should the first hire be a specialist or generalist?

Hire a generalist with primary expertise in your largest channel. If 70% of spend goes to Meta, hire someone strong on Meta who can competently manage Google as a secondary platform. Pure specialists require channel-specific volume to justify the narrower scope. At the scale where a first hire makes sense, a generalist covers more ground.

Is a junior hire ever the right call for a first media buying hire?

Rarely. A junior buyer needs experienced supervision to develop good judgment. Without an internal expert to learn from, a junior buyer develops whatever habits they develop—and those habits are hard to correct once established. A mid-level hire with 3–5 years of experience handles ambiguity better, works more independently, and reaches full productivity faster. The salary premium over junior is recouped within the first quarter through faster ramp and better early decisions.

What happens if the hire does not work out in the first 90 days?

Define clear performance expectations at the start of the ramp—not vague goals, but specific metrics with specific timelines. If performance against those metrics is significantly below threshold by month three, have a direct conversation about what specifically is not working and what improvement looks like. If there is no improvement within 30 days of that conversation, the hire is not going to work. Carry underperformance past 90-120 days and you have paid for four to five months of suboptimal campaign management. The cost of replacement is real but bounded; the cost of continued underperformance compounds.

How much should I budget for tools beyond the hire’s salary?

Budget $200–$500 monthly per buyer for tools: ad intelligence platforms, creative testing tools, landing page optimization software, and reporting infrastructure. Do not economize here. A buyer spending two hours daily on tasks that tools could automate in 20 minutes is a buyer not optimizing campaigns. The productivity gains from proper tooling recoup their cost within weeks.


Key Takeaways

  • The first media buying hire is the hardest. No internal expert to benchmark against, no peer group to learn from, no playbook that makes a mediocre hire survivable. The evaluation and selection process must be rigorous precisely because the stakes are highest with no institutional cushion.

  • Sourcing determines quality ceiling. Posting on generic job boards produces generic candidates. Active sourcing from performance marketing communities, agency alumni networks, and vertical-adjacent companies produces candidates with the specific experience your operation needs.

  • The interview should test thinking, not credentials. Platform familiarity is not the same as analytical capability. Structured case studies, diagnostic questions, and portfolio evaluation reveal how candidates actually work. Credential review alone does not.

  • Compensation benchmarks are $65,000–$120,000 for most first hires depending on market and experience. Underpaying to save cash in the near term is a false economy when you account for the total cost of a failed hire.

  • The 90-day ramp plan is not optional. Improvised onboarding produces improvised results. A structured ramp with defined deliverables at each phase accelerates productivity and creates early detection for hires that are not going to work.

  • Hire vs. outsource is a math problem. Below $25,000 monthly spend, the math rarely supports a full-time hire. Above $40,000 with positive unit economics, it usually does. Between those points, the calculation depends on your operational stage and alternative uses of capital.


This guide is part of The Lead Economy series on building and operating lead generation businesses.
