PKH · Advertising Performance Audit
3 Problems.
1 Clear Path Forward.

Across $7.00M in advertising spend (App State, ECU, NCCU · Google Search + Facebook · Apr 2024–Mar 2026), we found a system optimized primarily for lead volume, with limited direct visibility into campaign-level enrollment. Here's what the data shows and how to improve it.

$7.00M · Total Spend Analyzed
2.69x · Est. Blended ROAS
Google 3.01x · Facebook 1.30x
⟶ Target: 4x
14.4% · Spend Below 1.25x Viability Floor
⟶ Target: sub-5%
DATA: "By School program channel month.xlsx" (Spend allocation folder) · App State, ECU, NCCU · 45 program-channel pairs · Apr 2024–Mar 2026 · Google Search + Facebook · ROAS modeled: 23–36% submit-to-enroll by program, 60% avg grad rate, SF Submitted counts. Quarterly channel share from "By School program channel month.xlsx" School_Program_Channel_Month sheet. Search terms analysis from Google Ads Search Terms Report Jun 1, 2025–Apr 16, 2026. Quality Score data from Google Ads Keyword report, pulled Apr 2026.
Issue 01 · Channel Mix
Facebook returns 1.30x on ad spend. Google returns 3.01x. Facebook still absorbs 19% of the budget.

Facebook generates leads at $207 CPL versus Google's $328, but converts at a fraction of the rate — 7.6% lead-to-submit versus Google's 23.7%. Facebook's share has declined from 21.3% in Q1 '25 to 14.7% in Q1 '26, which is the right direction. But at $1.35M and 1.30x ROAS, it still absorbs 19% of budget at less than half Google's enrollment return.

1.30x ⟶ Target: 2x+
Facebook ROAS is 1.30x — Google's is 3.01x. Facebook's share is declining but still consumes $1.35M at less than half Google's return.

Facebook generates cheap leads — $207 CPL versus Google's $328. But those leads convert at a much lower rate: only 7.6% of Facebook leads result in an application, versus 23.7% for Google. Cost per SF submit is $3,417 on Facebook versus $1,719 on Google — a 2x gap. Facebook's share has been declining (21.3% in Q1 '25 → 14.7% in Q1 '26), moving in the right direction. But at $1.35M still active, there is meaningful opportunity to accelerate the reallocation toward programs where the return is confirmed.

Google vs. Facebook · Full Funnel Comparison

Source: "PKH Spend Data.xlsx" · By_Channel sheet (CPL, CPSubmit, lead→submit rates) · ROAS modeled from "By School program channel month.xlsx" · ROAS Estimates sheet · Apr 2024–Mar 2026

✓ Google
3.01x
Estimated ROAS
Spend: $5.77M (81.1%)
CPL: $328
CPSubmit: $1,719
Lead → Submit rate: 23.7%
⚠ Facebook
1.30x
Estimated ROAS
Spend: $1.35M (18.9%)
CPL: $208
CPSubmit: $3,417
Lead → Submit rate: 7.6%

Facebook Share of Spend by Quarter

Source: "PKH Spend Data.xlsx" (Spend allocation folder) · Spend_by_Campaign_Day sheet · Q1 = Jan–Mar, Q2 = Apr–Jun, Q3 = Jul–Sep, Q4 = Oct–Dec

Q1 '25
21.3%
baseline
Q2 '25
18.9%
↓ −2.4 pts
Q3 '25
16.6%
↓ −4.7 pts
Q4 '25
15.6%
↓ −5.7 pts
Q1 '26
14.7%
↓ −6.6 pts YoY
YoY Spend · Q1 '25 vs Q1 '26
Facebook: −20% ($179K → $144K in Q1)
Google: +26% ($662K → $833K in Q1)
Opportunity at Current Mix
Facebook CPSubmit: $3,417
Google CPSubmit: $1,719 — 2x better
The numbers in full: $1.35M on Facebook returns ~$1.75M in estimated tuition value (1.30x). That same budget allocated to Google Search programs (2.96x ROAS) would return ~$4.0M. The modeled opportunity cost is ~$2.3M in estimated enrollment value — not because Facebook loses money, but because the same investment generates meaningfully more through Search programs that have demonstrated stronger enrollment conversion.
~$2.3M · Opportunity cost vs. Google reallocation
2.3x · Google vs. FB ROAS gap
2.0x · FB vs. Google CPSubmit gap
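The reallocation arithmetic can be sketched in a few lines. The ROAS inputs are the report's own modeled estimates (directional, not actuals), and the result spans roughly $2.2M to $2.3M depending on whether the blended Google figure (3.01x) or the Search-program figure (2.96x) is used:

```python
# Sketch of the reallocation arithmetic behind the opportunity-cost estimate.
# ROAS figures are the report's modeled estimates; returns are directional.
fb_spend = 1.35e6           # current Facebook spend
fb_roas = 1.30              # Facebook estimated ROAS
google_roas_blended = 3.01  # blended Google ROAS
google_roas_search = 2.96   # Google Search program-level ROAS cited in the text

fb_return = fb_spend * fb_roas                      # ~$1.75M
return_if_search = fb_spend * google_roas_search    # ~$4.00M
return_if_blended = fb_spend * google_roas_blended  # ~$4.06M

# Opportunity cost lands at ~$2.2M-$2.3M depending on which Google figure is used
opp_cost_low = return_if_search - fb_return    # ~$2.24M
opp_cost_high = return_if_blended - fb_return  # ~$2.31M
print(f"Opportunity cost: ${opp_cost_low/1e6:.2f}M to ${opp_cost_high/1e6:.2f}M")
```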
Improve Facebook's signal quality. Rebalance budget toward higher-returning programs.
🔁
Do now
Redirect $500K–$800K from Facebook to proven Google Search programs
Shifting $500K–$800K to proven 2x+ Google Search programs could generate an estimated ~$2.3M more in modeled enrollment value at flat total spend. Start with programs that already have the strongest CPSubmit and submit-to-enroll track record.
↑ Est. ~$2.3M additional enrollment value at flat total spend
🎯
Next 60 days
Upgrade Facebook's quality signals — Customer Match and Conversion API (CAPI)
The lead-to-submit gap suggests Facebook's algorithm is optimizing against weak signals. Two upgrades address this directly: Customer Match — seed audiences with enrolled student data so the algorithm finds people with genuine completion intent; Conversion API (CAPI) — pass downstream submit and enrollment events server-side so Facebook's bidding optimizes toward actual enrollment outcomes, not just lead form fills.
↑ Better signal quality → higher-intent leads → improved lead-to-submit rate
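A minimal sketch of what the CAPI upgrade involves on the server side: a downstream submit event with a hashed identifier, sent from PKH's systems rather than the browser. Field names follow Meta's Conversions API conventions, but the event name, identifiers, and details below are placeholders; verify against current Meta documentation before implementing.

```python
# Sketch of a server-side Conversion API event payload for a downstream
# application-submit event. Field names follow Meta's CAPI conventions;
# the event name and email are hypothetical placeholders.
import hashlib
import json
import time

def capi_event(email: str, event_name: str = "SubmitApplication") -> dict:
    # CAPI requires user identifiers to be normalized and SHA-256 hashed
    hashed_email = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    return {
        "event_name": event_name,
        "event_time": int(time.time()),
        "action_source": "system_generated",  # server-side, not browser pixel
        "user_data": {"em": [hashed_email]},
    }

payload = {"data": [capi_event("student@example.edu")]}
print(json.dumps(payload, indent=2)[:120])
```

Passing submit and enrollment events this way gives Facebook's bidding a real outcome signal instead of form fills alone.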
🔄
90 days — if ROAS does not improve
Reconsider Facebook for small, localized programs — it likely shouldn't run on every program
Facebook's algorithm requires high lead volume to optimize. Small, localized programs can't generate that volume — which means the system never learns, CPL stays elevated, and ROAS stays weak. This is a consistent pattern across education clients: subscale programs on Facebook don't work. The better channel for these programs is affiliates — degree directories, education marketplaces, and financial aid platforms that reach people already actively looking. Declared intent converts. Broad social targeting at low volume doesn't.
↑ Match channel to program scale — affiliates for subscale, Facebook only where volume supports it
Issue 02 · Account Discipline
$383K — 47% of broad match spend lands on generic or low-intent queries (L1+L2).

L1 generic terms (topic words, community college brands, career searches) and L2 low-mid intent (school-agnostic queries) together account for $383K — 47% of broad match. Beyond direct cost, this mix degrades Quality Score account-wide, raising CPCs on the high-intent terms that do convert.

$383K
Without negative keywords, the same bid reaches someone searching "online psychology degree completion" and someone searching "psychology." The algorithm treats them identically.

Broad match without negative keywords means the accounts reach their most likely and least likely converters with the same bid and budget. Someone searching "online psychology degree completion" and someone searching "psychology" receive the same ad. That mix makes performance harder to read, dilutes spend efficiency, and penalizes Quality Score — which raises CPCs on the high-intent terms worth paying for.

The Mechanism · A Real Example From Your Account

How broad match works — and why it matters here

Your campaigns bid on high-intent keywords like "online psychology degree" — but broad match means those same bids also trigger when someone types a single word like "psychology." Google's algorithm treats them as the same audience. You pay nearly the same price for both.

⚠ Low-Intent Query
"psychology"
A single-word curiosity search. Could be a student, a parent, or someone exploring career fields. No clear signal they want to enroll — or even go back to school.
Matched to: online psychology degree broad
$18.81 · avg CPC paid
$206 · CPL from query
9.2% · click → lead rate
~1% · est. lead → enroll
189 clicks · $3,555 total spend · 17 leads
✓ High-Intent Query
"online psychology degree"
Someone actively comparing psychology programs online. The search itself signals they want an online option and know they want a degree. Close to submitting an application.
Matched to: online psychology degree exact
$46.05 · avg CPC paid
$179 · CPL from query
25.7% · click → lead rate
~4% · est. lead → enroll
173 clicks · $7,966 total spend · 44 leads
The core problem

Both queries match the same broad match keyword. The high-intent query earns a higher CPC ($46) because it converts — but the low-intent query still costs $19/click for traffic that rarely converts to a lead. Without negative keywords, the account is paying real money for both, with no way to separate them in reporting. This pattern repeats across every broad match keyword in the account — at scale, across three universities.

What this means at $807K in broad match spend · Google Search, Jun '25–Apr '26 · L1–L5 breakdown
Google Search only · "Search terms report (6).csv" (Search terms folder) · Jun 1, 2025–Apr 16, 2026 · Match type column from Google Ads export · L1–L5 classified by query intent signals
47.4%
of broad match spend on L1+L2 generic queries
⟶ Target: sub 5%
$383K
reaching people not actively seeking a degree
QS 5.01
account avg · poor CTR from irrelevant serves
⟶ Target: QS 7+

Low CTR from irrelevant traffic signals to Google that your ads aren't relevant — which lowers Quality Score account-wide and raises CPCs even on the high-intent terms that do convert.

Where the $807K in Broad Match Spend Actually Goes · L1–L5 Intent Tiers · Jun 1 '25–Apr 16 '26

L1 — Generic
"psychology"
25.5% · $205K
$205,411
L2 — Low-Mid
"psychology school"
22.0% · $177K
$177,283
L3 — Mid
"psychology degree"
13.0% · $105K
$104,635
L4 — High
"online psychology degree"
14.1% · $114K
$113,599
L5 — Partner
"ecu online psychology degree"
25.5% · $206K
$205,631
L1+L2 combined: $383K = 47.4% of broad match — generic topic words, school-agnostic searches, career explorers. No degree intent signal.
✓ High-intent (want to enroll)
"online psychology degree nc" $5,395
"online accounting degree completion" $3,210
"finish bachelor's degree online" $2,840
⚠ Low-intent (not looking to enroll)
"psychology" $4,185
"cpcc" $2,964
"forensic science" $2,535
"wake tech" $1,672
"crime scene investigator" $1,557

Quality Score Distribution · 1,491 Keywords (Spend-Weighted)

Weighted by keyword spend — a QS 3 keyword with $50K in spend counts more than a QS 8 keyword with $1K · Source: "Search keyword report January 1, 2026 - April 17, 2026.csv" · Quality Score folder · 1,491 keywords with QS scores (spend-weighted) across App State, ECU, NCCU, NC A&T

Poor (1–3) · 28.3%
Below avg (4–5) · 32.2%
Average (6) · 17.5%
Good (7–10) · 22.0%

60.5% of keywords score below 6. Only 22% score 7 or above. The mechanism is the broad match problem above — irrelevant serve → low CTR → low QS.
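Spend-weighting works as the source note describes: each keyword's spend, not its count, determines its bucket's share. A sketch with hypothetical values:

```python
# How a spend-weighted Quality Score distribution is computed: each keyword
# contributes its spend to its QS bucket. Values below are illustrative,
# not actual account data.
from collections import defaultdict

keywords = [  # (quality_score, spend) -- hypothetical examples
    (3, 50_000), (8, 1_000), (5, 30_000), (6, 12_000), (2, 7_000),
]

def bucket(qs: int) -> str:
    if qs <= 3:
        return "Poor (1-3)"
    if qs <= 5:
        return "Below avg (4-5)"
    if qs == 6:
        return "Average (6)"
    return "Good (7-10)"

spend_by_bucket = defaultdict(float)
for qs, spend in keywords:
    spend_by_bucket[bucket(qs)] += spend

total = sum(s for _, s in keywords)
for name, spend in sorted(spend_by_bucket.items()):
    print(f"{name}: {100 * spend / total:.1f}% of spend")
```

This is why a single QS 3 keyword with $50K behind it dominates the distribution: by spend, the low buckets carry far more weight than a keyword count would suggest.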

69% · Expected CTR Below Average
34.8% · Ad Relevance Below Average
9.8% · Landing Page Below Average
5.01 · Average QS (Target: 7+)

Ad Relevance Below Average — by University

NCCU
45.9% below avg
Worst
ECU
38.1% below avg
$2.2M spend
App State
23.8% below avg
$2.0M spend

NCCU — the highest-spend school — has the highest rate of below-average ad relevance. It's also the account where improved alignment would have the most direct impact on CPL and Quality Score.

Block the lowest-intent terms now. Restructure the rest over 60–90 days.

The data tells us exactly what to block. Start there. Once the account is clean, restructure by intent group so bid strategy and ad copy are matched to what each audience is actually looking for.

🚫
Do now · 1–2 weeks
Add a shared negative list across all campaigns — start with the confirmed low-intent terms
The search terms data identifies exactly what to block first. Add immediately: bare subject words with no degree modifier ("psychology," "forensic science," "criminology," "nursing"), career and job-title terms ("crime scene investigator," "homeland security jobs," "healthcare careers"). Block account-wide within one billing cycle.
↑ Immediate reduction in low-intent spend · CPL clarity improves within 30 days
🔬
Ongoing
Weekly search terms review — negative keyword backlog
One keyword alone triggered 15,313 search terms. A shared negative list catches the biggest offenders, but the account will keep generating new low-intent traffic unless someone is reviewing weekly. Scan top search terms by spend each week, add patterns as negatives before the next week's spend runs.
↑ Prevents re-pollution after the initial cleanup
🏗
60–90 days
Restructure account by intent group — competitors, general degree, completion-specific
Not all mid-intent traffic should be blocked. Competitor brand + admissions terms, general degree searches, and program-specific queries each warrant a different bid level and ad message. Restructure so each ad group serves one intent. Assign bids, budgets, and copy to match. This is what enables smart bidding to work — and what gets Quality Score to 7+.
↑ QS 7+ becomes achievable · unlocks CPSubmit bidding strategy
Issue 03 · Program Allocation
Budget could be better allocated based on enrollment potential and ROAS. We can fix that.

Budget decisions appear to be weighted more toward CPL targets than enrollment outcomes. Improved data integrity and reporting would make these optimizations easier to execute and support better tradeoff discussions with partners. The data to support this already exists.

71.6% → 63.2%
The strongest programs are shrinking as a share of platform leads. In one year, the 2x+ tier dropped from 71.6% of leads to 63.2% — while sub-1x programs grew from 7.2% to 24.2% of leads.

In Q1 2025, programs returning 2x or better generated 71.6% of platform leads. By Q1 2026, that share had fallen to 63.2% — while the sub-1x tier grew from 7.2% to 24.2% of leads in the same period. The portfolio is generating a growing share of its leads from its worst-performing programs.

Share of Platform Leads by ROAS Tier · Q1 2025 – Q1 2026

Each bar = 100% of quarterly leads. The sub-1x slice (red) grew from 7.2% to 24.2% of leads in one year. CPSubmit band view excludes NCCU MBA (Google Search) — its $12,679 CPSubmit skewed the band distribution. Source: "By School program channel month.xlsx" · School_Program_Channel_Month sheet · ROAS modeled

Lead Volume Moved in the Wrong Direction · Q1 '25 → Q1 '26

Jan–Mar 2025 vs Jan–Mar 2026 · "By School program channel month.xlsx" · School_Program_Channel_Month sheet · ROAS modeled

↓ Leads lost — strong ROAS programs cut

NCCU · Health Admin · Google
ROAS 2.58x 285 → 19 leads ▼ −93%
ECU · Criminal Justice · Google
ROAS 3.94x 152 → 106 leads ▼ −30%
APPSTATE · Org Leadership · Google
ROAS 2.49x 108 → 73 leads ▼ −32%

↑ Leads gained — weak ROAS programs launched

NCCU · MBA · Google
ROAS 0.37x 0 → 285 leads ▲ new launch
APPSTATE · HCM · Facebook
ROAS 0.51x 57 → 58 leads → flat
APPSTATE · Supply Chain · Facebook
ROAS 0.68x 33 → 32 leads → flat

Program-Level Drill · Monthly Leads & CPSubmit

Sorted by total leads · NCCU MBA shown first

Use data to bring partners to the decisions that drive better enrollment per dollar spent.

The data to make better allocation decisions exists — it just isn't connected or visible enough to act on. We can build the system and the shared framework that gives PKH and its university partners a common view of performance, so the right budget decisions become obvious rather than contested.

Do now
Start the conversation with data — show partners what the ROAS spread looks like
The sub-1x programs aren't a secret — they just haven't been presented in a way that makes the tradeoff clear. Sharing a simple program-level ROAS view with partners opens the door to reallocation without it feeling like a unilateral cut. When partners can see that $1 spent on Program A returns $3.94 and Program B returns $0.37, the right answer tends to surface on its own.
↑ Shared visibility → partners arrive at the right decisions with you
📐
Next 60 days
Build a monthly allocation model that makes the performance tiers the basis for every budget conversation
Score each program by ROAS tier. Assign budget multipliers: 2x+ grows, 1.5–2x holds, 1–1.5x is reviewed together, sub-1x is restructured or paused. When allocation decisions are anchored to a shared model — not judgment calls — partners are more likely to accept reductions on low performers because the framework, not PKH, is making the recommendation.
↑ Data-driven framework replaces friction-heavy budget negotiations
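The tiering logic above translates directly into a small rules table. The multiplier values below are illustrative assumptions, not the report's recommendation; the point is that the framework, not a judgment call, produces the action:

```python
# Sketch of the ROAS-tier budget multiplier logic. Multiplier values are
# illustrative assumptions; only the tier boundaries come from the text.
def budget_action(roas: float) -> tuple[str, float]:
    """Return (action, next-period budget multiplier) for a program's ROAS."""
    if roas >= 2.0:
        return ("grow", 1.20)
    if roas >= 1.5:
        return ("hold", 1.00)
    if roas >= 1.0:
        return ("review together", 0.85)
    return ("restructure or pause", 0.50)

programs = {  # modeled ROAS figures from the report's program-level view
    "ECU Criminal Justice (Google)": 3.94,
    "NCCU Health Admin (Google)": 2.58,
    "APPSTATE HCM (Facebook)": 0.51,
    "NCCU MBA (Google)": 0.37,
}
for name, roas in programs.items():
    action, mult = budget_action(roas)
    print(f"{name}: {roas:.2f}x -> {action} (x{mult:.2f} budget)")
```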
🔗
Longer-term engagement
Build the enrollment intelligence system that keeps everyone accountable to the same numbers
Right now, ROAS figures are estimated. The longer-term opportunity is a live pipeline — clicks to applications, applications to enrollments, enrollments to tuition revenue — that gives PKH and its partners a single source of truth. When partners can see their program's performance in real time, allocation conversations shift from opinion to evidence. We propose building and operating that system with eCue: engineering the data pipeline, running monthly allocation reviews, and helping PKH lead partners toward the outcomes the data supports.
↑ Real-time shared data makes the right answer undeniable
The Path Forward
Three workstreams. Prioritized by impact and speed.

The problems are layered but the fixes are sequenced. Quick wins reduce waste immediately. Structural work unlocks performance-based optimization. Data engineering makes the whole system self-improving over time.

Quick Wins · Weeks 1–4

Reduce low-intent spend. Shift budget toward higher-returning programs.

Reallocate $500K–$800K from Facebook to Google Search Facebook is above breakeven (1.30x) but generates less than half Google's 2.96x return. Same total spend, ~$2.2M more in estimated enrollment value.
→ Fastest ROI of any action available
Add shared negative keyword lists + weekly search term review Block low-intent, career, and community college terms. Shared negatives clean up the signal immediately; weekly review prevents re-accumulation.
→ CPL improvement, QS gains, cleaner performance signal
Pause 5 lowest-ROAS programs Programs below 0.5x ROAS. Recover budget and redirect to 2x+ programs.
→ Immediate waste reduction
Bigger Wins · 60–120 Days

Rebuild the machine to optimize for what matters.

Restructure Google Search accounts One ad group per intent cluster, program-level campaigns, tight match types. Unlocks Quality Score 7+ and smart bidding.
→ CPL premium reduction, algorithm alignment
Shift bid strategy to CPSubmit target Once structure supports it, move from CPL bidding to submit-event optimization — aligning the algorithm with actual funnel outcomes.
→ Bidding aligned to enrollment funnel
Build monthly program allocation model ROAS-based waterfall budgeting reviewed each month. Grows high-returning programs, reduces low-returning ones systematically.
→ Same spend, better enrollment mix
Data Engineering · Ongoing

Make the system self-improving with real data.

Build end-to-end measurement pipeline Clicks → Archer leads → SF submits → enrollment → graduation-adjusted tuition. ROAS becomes a real signal, not an estimate.
→ Closes the measurement gap entirely
Implement CRM-based conversion tracking Import SF enrollment events into Google Ads so the algorithm optimizes toward actual enrollment outcomes, not just form fills.
→ Platform optimization aligned to real outcomes
Reconcile Archer/SF attribution gap 14.6% lead-to-submit join rate needs investigation. Fix this before any measurement-driven optimization is reliable.
→ Foundation for everything above
Data Context
What we analyzed and what we estimated.

The analysis is grounded in platform data and CRM records. Enrollment and revenue figures are modeled — they should be treated as directional, not actuals, until the full pipeline is built.

Platform & CRM Data — Verified
Ad Spend (Google + Facebook): $7.00M
Platform Leads: 25,843
App Submitted Applications: 4,706
Google Quality Score Data: 1,491 keywords
Channel / Program / School splits: Complete
Modeled / Estimated — Directional
Submit → Enroll Rate: 23–36% by program and school
Graduation Rate: 60%, assumed uniform
Est. Enrollments: Modeled from submits
Est. ROAS: Directional, not actuals
Archer → SF Lead Join: Full match rates need investigation
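The modeled ROAS follows directly from the assumptions above: submits, a program-level submit-to-enroll rate, a 60% graduation adjustment, and tuition value per graduate. A minimal sketch, where the tuition figure is a hypothetical placeholder rather than a number from the analysis:

```python
# Sketch of the ROAS model described in the methodology notes: SF submits,
# a program-level submit-to-enroll rate (23-36%), a 60% graduation rate,
# and tuition value. The $40K tuition figure is a hypothetical placeholder.
def modeled_roas(submits: int, submit_to_enroll: float, spend: float,
                 tuition_value: float, grad_rate: float = 0.60) -> float:
    """Estimated graduation-adjusted tuition return per ad dollar."""
    est_enrollments = submits * submit_to_enroll
    est_revenue = est_enrollments * grad_rate * tuition_value
    return est_revenue / spend

# e.g. 100 submits at a 30% submit-to-enroll rate and an assumed $40K
# tuition value, against $250K of spend:
print(round(modeled_roas(100, 0.30, 250_000, 40_000), 2))  # 2.88
```

Every input except submits and spend is an estimate, which is why the report treats these ROAS figures as directional until the full measurement pipeline exists.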
Data Hygiene & Accessibility Assessment
Area · Rating · Notes
Spend → lead attribution · ⚠ Partial · 46% match rate; gap due to inconsistent campaign naming across platforms and time periods
Lead → application linkage · ⚠ Partial · Application outcomes tied to leads via Archer join, but Banner ID coverage limits SF confirmation to ~17%
Cross-system identity · ✕ Not available · Archer and PKH/Salesforce use different SF org IDs; no person-level bridge exists between the two
Channel & UTM attribution · ⚠ Partial · 31% of leads arrive with no UTM source; keyword data present for Google only and covers ~24% of leads
Program data · ✓ Complete · Program names fully resolved via crosswalk; required manual mapping to bridge inconsistent code formats
Date consistency · ⚠ Inconsistent · Campaign IDs stored in the campaign name field during Nov 2024–May 2025; required detection logic to recover
File format consistency · ⚠ Inconsistent · Application files delivered in mixed formats (xlsx and csv) with different column names and ID formats across institutions
Data freshness · ⚠ Unclear · Lead file appears to be a point-in-time snapshot; no refresh cadence established
What best-in-class looks like
Standardized attribution

Consistent UTM governance and campaign ID conventions across all platforms. No manual cleaning — every lead traceable to its source channel, program, and spend line item automatically.

A connected enrollment funnel

A single identity layer links leads to applicants to enrollments across Archer and Salesforce. Timestamped milestones at every stage — contactability, application start, submission, enrollment — make the student journey readable end to end.

Real ROAS. Real decisions.

Estimated ROAS replaced by actuals. Allocation decisions grounded in what programs genuinely return — not modeled assumptions. A live system that improves the more it runs.