Across $7.00M in advertising spend (App State, ECU, NCCU · Google Search + Facebook · Apr 2024–Mar 2026), we found a system primarily optimized for lead volume, with insufficient direct visibility into campaign-level enrollment outcomes. Here's what the data shows and how to improve it.
Facebook generates leads at $207 CPL versus Google's $328, but converts at a fraction of the rate: only 7.6% of Facebook leads result in an application, versus 23.7% for Google. Cost per SF submit is $3,417 on Facebook versus $1,719 on Google, a 2x gap. Facebook's share of spend has declined from 21.3% in Q1 '25 to 14.7% in Q1 '26, which is the right direction. But at $1.35M and 1.30x ROAS, Facebook still absorbs 19% of budget at less than half Google's enrollment return, so there is meaningful opportunity to accelerate the reallocation toward programs where the return is confirmed.
Google vs. Facebook · Full Funnel Comparison
Source: "PKH Spend Data.xlsx" · By_Channel sheet (CPL, CPSubmit, lead→submit rates) · ROAS modeled from "By School program channel month.xlsx" · ROAS Estimates sheet · Apr 2024–Mar 2026
Facebook Share of Spend by Quarter
Source: "PKH Spend Data.xlsx" (Spend allocation folder) · Spend_by_Campaign_Day sheet · Q1 = Jan–Mar, Q2 = Apr–Jun, Q3 = Jul–Sep, Q4 = Oct–Dec
L1 generic terms (topic words, community college brands, career searches) and L2 low-mid intent (school-agnostic queries) together account for $383K — 47% of broad match. Beyond direct cost, this mix degrades Quality Score account-wide, raising CPCs on the high-intent terms that do convert.
Broad match without negative keywords means the accounts reach their most likely and least likely converters with the same bid and budget. Someone searching "online psychology degree completion" and someone searching "psychology" receive the same ad. That mix makes performance harder to read, dilutes spend efficiency, and penalizes Quality Score, which raises CPCs on the high-intent terms worth paying for.
The Mechanism · A Real Example From Your Account
How broad match works — and why it matters here
Your campaigns bid on high-intent keywords like "online psychology degree" — but broad match means those same bids also trigger when someone types a single word like "psychology." Google's algorithm treats them as the same audience and funds both from the same budget.
Both queries match the same broad match keyword. The high-intent query earns a higher CPC ($46) because it converts — but the low-intent query still costs $19/click for traffic that rarely converts to a lead. Without negative keywords, the account is paying real money for both, with no way to separate them in reporting. This pattern repeats across every broad match keyword in the account — at scale, across three universities.
Low CTR from irrelevant traffic signals to Google that your ads aren't relevant — which lowers Quality Score account-wide and raises CPCs even on the high-intent terms that do convert.
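The tiering and blocking logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the marker words and tier rules below are stand-ins, not the actual L1–L5 taxonomy used in this analysis, and the query rows are made-up examples rather than account data.

```python
# Illustrative sketch: bucket search queries into rough intent tiers and
# surface low-intent queries as negative-keyword candidates, sorted by spend.
# Marker words and thresholds are hypothetical, not the report's taxonomy.

HIGH_INTENT_MARKERS = ("online", "degree", "completion", "bachelor", "master")

def classify_query(query: str) -> str:
    """Assign a rough intent tier based on how many intent markers appear."""
    words = query.lower().split()
    hits = sum(1 for w in words if w in HIGH_INTENT_MARKERS)
    if hits >= 2:
        return "high"   # e.g. "online psychology degree completion"
    if hits == 1:
        return "mid"    # school-agnostic but directionally relevant
    return "low"        # single topic words like "psychology"

def negative_candidates(rows):
    """Return low-intent queries worth blocking, highest wasted spend first."""
    lows = [r for r in rows if classify_query(r["query"]) == "low"]
    return sorted(lows, key=lambda r: r["spend"], reverse=True)

rows = [
    {"query": "online psychology degree completion", "spend": 46.0},
    {"query": "psychology", "spend": 19.0},
    {"query": "what is psychology", "spend": 12.0},
]
for r in negative_candidates(rows):
    print(r["query"], r["spend"])
```

Run against a full search-term export, the same sort order gives a prioritized negative-keyword list: block the biggest low-intent spenders first.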
Where the $807K in Broad Match Spend Actually Goes · L1–L5 Intent Tiers · Jun 1 '25–Apr 16 '26
Quality Score Distribution · 1,491 Keywords (Spend-Weighted)
Weighted by keyword spend — a QS 3 keyword with $50K in spend counts more than a QS 8 keyword with $1K · Source: "Search keyword report January 1, 2026 - April 17, 2026.csv" · Quality Score folder · 1,491 keywords with QS scores (spend-weighted) across App State, ECU, NCCU, NC A&T
60.5% of keywords score below 6. Only 22% score 7 or above. The mechanism is the broad match problem above: irrelevant serving → low CTR → low Quality Score.
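Spend-weighting is what makes this distribution honest: a QS 3 keyword carrying $50K matters far more than a QS 8 keyword carrying $1K. A minimal sketch of the calculation, with illustrative numbers rather than the account's actual keyword rows:

```python
# Spend-weighted Quality Score distribution: each keyword's weight is its
# spend, not its count, so one large low-QS keyword outweighs many small ones.
from collections import defaultdict

def spend_weighted_qs(keywords):
    """keywords: list of (quality_score, spend). Returns share of spend per QS."""
    by_qs = defaultdict(float)
    total = 0.0
    for qs, spend in keywords:
        by_qs[qs] += spend
        total += spend
    return {qs: s / total for qs, s in by_qs.items()}

# Hypothetical rows, not account data
keywords = [(3, 50_000), (8, 1_000), (5, 9_000)]
dist = spend_weighted_qs(keywords)
below_6 = sum(share for qs, share in dist.items() if qs < 6)
print(f"share of spend on QS < 6: {below_6:.1%}")
```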
Ad Relevance Below Average — by University
NCCU — the highest-spend school — has the highest rate of below-average ad relevance. It's also the account where improved alignment would have the most direct impact on CPL and Quality Score.
The data tells us exactly what to block. Start there. Once the account is clean, restructure by intent group so bid strategy and ad copy are matched to what each audience is actually looking for.
Budget decisions appear to be weighted more toward CPL targets than toward enrollment outcomes. Improved data integrity and reporting would make these optimizations easier to execute and support better conversations with partners about the tradeoffs. The data to support this already exists.
In Q1 2025, programs returning 2x or better generated 71.6% of platform leads. By Q1 2026, that share had fallen to 63.2% — while the sub-1x tier grew from 7.2% to 24.2% of leads in the same period. The portfolio is generating a growing share of its leads from its worst-performing programs.
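The tier shares above come from a simple bucketing of program-level lead volume by modeled ROAS. A sketch of the calculation, using the report's 2x and 1x cutoffs but made-up program rows:

```python
# Bucket programs into ROAS tiers and compute each tier's share of quarterly
# lead volume. Cutoffs mirror the report's bands; program rows are hypothetical.

def roas_tier(roas: float) -> str:
    if roas >= 2.0:
        return "2x+"
    if roas >= 1.0:
        return "1x-2x"
    return "sub-1x"

def tier_shares(programs):
    """programs: list of (roas, leads). Returns each tier's share of leads."""
    totals = {}
    for roas, leads in programs:
        tier = roas_tier(roas)
        totals[tier] = totals.get(tier, 0) + leads
    all_leads = sum(totals.values())
    return {t: n / all_leads for t, n in totals.items()}

# Hypothetical quarter: three programs at different modeled returns
quarter = [(2.4, 450), (1.3, 120), (0.6, 180)]
shares = tier_shares(quarter)
```

Comparing `shares` across quarters is exactly the stacked-bar view below: if the sub-1x share grows, the portfolio is leaning on its worst programs.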
Share of Platform Leads by ROAS Tier · Q1 2025 – Q1 2026
Each bar = 100% of quarterly leads. The sub-1x slice (red) grew from 7.2% to 24.2% of leads in one year. CPSubmit band view excludes NCCU MBA (Google Search) — its $12,679 CPSubmit skewed the band distribution. Source: "By School program channel month.xlsx" · School_Program_Channel_Month sheet · ROAS modeled
Lead Volume Moved in the Wrong Direction · Q1 '25 → Q1 '26
Jan–Mar 2025 vs Jan–Mar 2026 · "By School program channel month.xlsx" · School_Program_Channel_Month sheet · ROAS modeled
↓ Leads lost — strong ROAS programs cut
↑ Leads gained — weak ROAS programs launched
Program-Level Drill · Monthly Leads & CPSubmit
Sorted by total leads · NCCU MBA shown first
The data to make better allocation decisions exists — it just isn't connected or visible enough to act on. We can build the system and the shared framework that gives PKH and its university partners a common view of performance, so the right budget decisions become obvious rather than contested.
The problems are layered but the fixes are sequenced. Quick wins reduce waste immediately. Structural work unlocks performance-based optimization. Data engineering makes the whole system self-improving over time.
The analysis is grounded in platform data and CRM records. Enrollment and revenue figures are modeled — they should be treated as directional, not actuals, until the full pipeline is built.
Consistent UTM governance and campaign ID conventions across all platforms. No manual cleaning — every lead traceable to its source channel, program, and spend line item automatically.
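Governance like this is enforceable in code. A minimal sketch of a tagging linter, assuming the required parameters shown and a hypothetical `school_program_channel` naming convention (the real convention would be agreed with PKH):

```python
# Minimal UTM-governance check: every landing URL must carry the required
# utm parameters, and utm_campaign must follow a school_program_channel
# convention. The pattern shown is a hypothetical example convention.
import re
from urllib.parse import urlparse, parse_qs

REQUIRED = ("utm_source", "utm_medium", "utm_campaign")
CAMPAIGN_PATTERN = re.compile(r"^[a-z]+_[a-z0-9]+_[a-z]+$")  # e.g. nccu_mba_search

def utm_violations(url: str):
    """Return a list of governance violations for one tagged URL."""
    params = parse_qs(urlparse(url).query)
    issues = [f"missing {k}" for k in REQUIRED if k not in params]
    campaign = params.get("utm_campaign", [""])[0]
    if campaign and not CAMPAIGN_PATTERN.match(campaign):
        issues.append(f"malformed utm_campaign: {campaign}")
    return issues

ok = "https://example.edu/apply?utm_source=google&utm_medium=cpc&utm_campaign=nccu_mba_search"
bad = "https://example.edu/apply?utm_source=google&utm_campaign=NCCU MBA"
print(utm_violations(ok))   # no violations
print(utm_violations(bad))  # missing utm_medium, malformed utm_campaign
```

Run in CI against every live destination URL, a check like this is what turns "no manual cleaning" from a goal into a guarantee.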
A single identity layer links leads to applicants to enrollments across Archer and Salesforce. Timestamped milestones at every stage — contactability, application start, submission, enrollment — make the student journey readable end to end.
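The shape of that identity layer is a keyed merge of the three record sets. A sketch under stated assumptions: records here are matched on email and the field names are placeholders, not actual Archer or Salesforce schema:

```python
# Sketch of the identity layer: link lead, application, and enrollment
# records on a shared email key into one milestone timeline per person.
# Field names and the email join key are assumptions for illustration.

def build_journeys(leads, applications, enrollments):
    """Merge three record sets into a timestamped journey per student."""
    journeys = {}
    for lead in leads:
        journeys[lead["email"]] = {
            "lead_at": lead["ts"],
            "campaign": lead["utm_campaign"],
        }
    for app in applications:
        journeys.setdefault(app["email"], {})["submitted_at"] = app["ts"]
    for enr in enrollments:
        journeys.setdefault(enr["email"], {})["enrolled_at"] = enr["ts"]
    return journeys

leads = [{"email": "a@x.edu", "ts": "2025-01-03", "utm_campaign": "nccu_mba_search"}]
applications = [{"email": "a@x.edu", "ts": "2025-01-20"}]
enrollments = [{"email": "a@x.edu", "ts": "2025-08-15"}]
journeys = build_journeys(leads, applications, enrollments)
```

In production this join would need real identity resolution (fuzzy matching, merged duplicates), but the output shape is the point: one record per student, readable end to end.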
Estimated ROAS replaced by actuals. Allocation decisions grounded in what programs genuinely return — not modeled assumptions. A live system that improves the more it runs.
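Once enrollments are linked to spend line items, actual ROAS stops being a model and becomes arithmetic. A trivial sketch with placeholder figures:

```python
# Actual ROAS: attributed tuition revenue divided by campaign spend.
# The revenue and spend figures below are placeholders, not account data.

def actual_roas(revenue: float, spend: float) -> float:
    if spend <= 0:
        raise ValueError("spend must be positive")
    return revenue / spend

# e.g. a program that enrolled students worth $260K on $200K of spend
print(round(actual_roas(260_000, 200_000), 2))  # 1.3
```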