Advances in Consumer Research
Issue 4 : 3858-3870
Research Article
Quality-First Marketing in Higher Education: Insights from Select Cities of Uttar Pradesh
1. Research Scholar, School of Business Management; Assistant Professor, Department of Computer Application, School of Engineering & Technology (UIET), Chhatrapati Shahu Ji Maharaj University (CSJMU), Kanpur (UP).
2. Professor, School of Business Management (SBM), Chhatrapati Shahu Ji Maharaj University (CSJMU), Kanpur (UP).
Received: Aug. 5, 2025 | Revised: Aug. 16, 2025 | Accepted: Sept. 8, 2025 | Published: Sept. 15, 2025
Abstract

This study investigates how perceived quality dimensions influence the marketing outcomes of higher-education institutions in Uttar Pradesh. Building on services-marketing and choice theory, we develop a structural model in which service quality (reliability, responsiveness, assurance, empathy, tangibles), academic reputation, faculty quality, placement outcomes, campus infrastructure, digital engagement and social proof, price–value perception, and word-of-mouth jointly influence prospective students’ enrollment intent. The model incorporates student satisfaction and perceived institutional fit as mediators, and tests moderation by city tier (NCR vs non-NCR) and household income. A cross-sectional survey of prospective UG/PG students and parents across major cities (e.g., Lucknow, Kanpur, Varanasi, Prayagraj, Agra, Meerut, Ghaziabad/Noida, Bareilly) is analyzed using PLS-SEM with bootstrapping, multi-group analysis (MICOM for invariance), and Importance–Performance Map Analysis to prioritize actionable levers. The framework enables institutions to quantify pathway effects (e.g., placements → satisfaction → intent), benchmark digital touchpoints against price–value perceptions, and tailor campaigns to city-tier heterogeneity. The paper contributes a validated measurement instrument for the UP context and a decision toolkit that links academic-quality signals to market outcomes, informing segmentation, positioning, and budget allocation for higher-education marketers.

Keywords
INTRODUCTION

Higher education in India has entered a decisive phase of expansion and differentiation. As enrollments grow and new institutions compete for attention, marketing is no longer peripheral; it is integral to how universities and colleges signal quality, build trust, and convert interest into applications and enrollments. Nowhere is this more visible than in Uttar Pradesh (UP)—a state with diverse city archetypes ranging from NCR-adjacent urban hubs (Ghaziabad/Noida) to historic academic centers (Lucknow, Varanasi, Prayagraj) and industrial–commercial cities (Kanpur, Agra, Meerut, Bareilly). Prospective students and their families in these cities balance aspirations for reputable degrees and strong placement records against constraints such as affordability, proximity, and perceived institutional fit [1]. In such a noisy marketplace, quality signals determine whether marketing messages resonate or are dismissed as generic advertising.

 

This study examines how quality-related dimensions shape higher-education marketing in key cities of Uttar Pradesh. Our core premise is straightforward: effective marketing outcomes (awareness → enquiry → application → enrollment intent) depend less on communication volume and more on how well campaigns amplify credible, perceivable quality. To operationalize “quality,” we consider widely accepted domains—service quality (reliability, responsiveness, assurance, empathy, tangibles), academic reputation, faculty quality, placement outcomes and industry linkages, campus infrastructure, digital engagement and social proof, price–value perception, location and accessibility, and word-of-mouth. We also recognize two critical psychological bridges—student satisfaction and perceived institutional fit—through which quality factors translate into concrete decisions.

 

Table 1 consolidates these constructs, clarifies their roles, and offers examples that are measurable and actionable in institutional marketing; we reference this schema throughout the paper to maintain conceptual clarity (see Table 1).

 

Table 1. Conceptual constructs for quality-led higher-education marketing (definitions, example indicators, expected effect on Enrollment Intent).

| Construct | Conceptual role | Illustrative indicators (survey items/examples) | Expected effect on EI |
|---|---|---|---|
| Service Quality (SQ) | Experiences around enquiry, counseling, administration, student support | "Queries resolved quickly"; "Staff are courteous and clear"; "Processes are hassle-free"; "Facilities are well-maintained" | Positive (direct and via Satisfaction) |
| Academic Reputation (AR) | Perceived standing of programs, research, accreditations, alumni | "Well-regarded by employers/academics"; "Strong outcomes for graduates"; "Recognized accreditations/ratings" | Positive (via Satisfaction & Fit) |
| Faculty Quality (FQ) | Teaching depth and industry linkage | "Experienced faculty"; "Mentorship access"; "Guest lectures/industry mentors" | Positive (via Satisfaction & Fit) |
| Placement & Industry Linkages (PL) | Employability signal | "Transparent placement stats"; "Active recruiter tie-ups"; "Internship opportunities" | Strong positive (via Satisfaction & Price–Value) |
| Campus Infrastructure (CI) | Learning and living environment | "Modern labs/library/sports"; "Safe hostels"; "Accessible campus" | Positive (via Satisfaction) |
| Digital Engagement & Social Proof (DE) | Findability, credibility, and clarity online | "Useful website/app"; "Authentic reviews/testimonials"; "Responsive social channels" | Positive (direct and as amplifier of other factors) |
| Price–Value Perception (PV) | Affordability and fairness of fees vs benefits | "Worth the fees"; "Scholarships/EMIs available"; "Transparent fee breakup" | Positive (direct and via Satisfaction) |
| Location & Accessibility (LA) | Travel time/safety/proximity | "Convenient commute"; "City safety"; "Transit connectivity" | Mixed positive (context dependent) |
| Word-of-Mouth (WOM) | Community endorsement | "Friends/relatives/coaches recommend this institution" | Positive (direct) |
| Perceived Institutional Fit (FIT) | Match between student goals and program | "Program aligns with my interests/career path" | Strong positive (direct) |
| Student Satisfaction (SAT) | Overall appraisal integrating experiences | "Overall, I am satisfied with this institution" | Strong positive (direct) |

 

Why a focused study on UP cities? First, UP presents heterogeneous consumer contexts within one administrative state. City-tier differences (e.g., NCR-adjacent vs non-NCR) can shift the salience of digital touchpoints: in NCR corridors, parents and students often consult sophisticated online reviews and program pages, while in other cities offline cues—campus visits, counselor guidance, coaching-center endorsements—may weigh more heavily. Second, price sensitivity and financing (scholarships, EMIs, fee transparency) frequently moderate choice; what looks like a modest price premium in a metro may be a decisive barrier elsewhere. Third, household decision dynamics vary: in some families, the student’s perceived academic “fit” dominates; in others, placement track record and safety/infrastructure become veto criteria. Marketing cannot control every determinant, but it can prioritize quality stories that align with local expectations.

 

In this environment, three challenges confront institutional marketers:

From claims to credibility. Audiences increasingly discount broad slogans (“world-class,” “industry-ready”) unless backed by verifiable evidence—e.g., named employer partnerships, published internship counts, transparent placement dashboards, faculty profiles, lab inventories, third-party accolades, and alumni outcomes. Quality stories must be concrete, comparable, and current to build trust.

 

From features to fit. Students seek self-relevance: program–interest match, pedagogy style, class size, mentorship, language support, and extracurricular culture. Communicating quality therefore requires mapping institutional strengths to student archetypes (STEM-focused, management-oriented, liberal arts-seeking; local commuter vs hostel resident; first-generation learner), not just listing features.

 

From reach to resonance. Media plans that maximize impressions may underperform if message–market alignment is weak. In UP’s mixed media landscape, digital engagement (website UX, mobile responsiveness, content depth, SEO visibility, credible reviews, social media narratives) must complement place-based outreach (school visits, city fairs, coaching institutes, community events). The return on spend improves when content elevates verifiable quality and addresses price–value trade-offs explicitly (scholarships, assistantships, loan partners).

 

Anchored in these challenges, our study pursues four questions:

RQ1: Which quality factors most strongly shape enrollment intent in the UP context?

RQ2: Through which pathways (e.g., via satisfaction and perceived fit) do these factors exert influence?

RQ3: How do city-tier differences (NCR vs non-NCR) and household income moderate these effects?

RQ4: Which levers emerge as high-importance/low-performance in an Importance–Performance lens, guiding near-term marketing priorities?

 

Methodologically, we employ a cross-sectional survey of prospective UG/PG students and parents in Lucknow, Kanpur, Varanasi, Prayagraj, Agra, Meerut, Ghaziabad/Noida, and Bareilly. The instrument captures perceptions of the constructs listed in Table 1, recent exposure to institutional communications, and self-reported enrollment intent. We analyze data using PLS-SEM to model multi-path effects among reflective constructs, test mediation via satisfaction/fit, and examine moderation by city tier and income. We also perform multi-group analysis to verify measurement comparability and extract managerial priorities via an Importance–Performance Map of enrollment intent.

 

The study’s contributions are threefold. First, it offers a context-specific measurement model linking quality perceptions to marketing outcomes for UP cities, accommodating the realities of mixed digital/offline touchpoints and variable price sensitivity. Second, it provides diagnostic visibility into mediating and moderating mechanisms, clarifying where to invest: for instance, whether placement disclosure or faculty visibility more effectively boosts satisfaction and fit, and how digital engagement differentially matters across city tiers. Third, it translates findings into a decision toolkit for admissions and marketing teams—prioritizing efforts that demonstrably raise enrollment intent within budget constraints.

 

Finally, the paper underscores a normative stance: quality-first marketing is not cosmetic. It relies on authentic academic and service improvements, made legible through transparent, user-centered communication. By tying messaging to verifiable outcomes—curriculum rigor, industry partnerships, student support, affordability—institutions cultivate long-term reputation rather than short-lived campaigns. The UP context, with its combination of scale, diversity, and mobility, is an ideal proving ground for this approach.

RELATED WORKS

Research on higher-education marketing has developed along several converging streams that together explain how prospective students and families interpret "quality" and transform it into action. The first stream treats higher education as a service experience with multiple points of contact. In the early phases of the decision cycle (awareness, enquiry, campus visit), the credibility of information, responsiveness to enquiries, staff assurance and empathy, and the physical state of facilities all shape decisions. In practice, these cues almost never work alone: the same conversation with an admissions executive can shift perceptions of administrative quality, affordability, and institutional fit [2-4]. A second stream focuses on academic reputation as a market signal. Reputation accrues over years through faculty quality, curriculum strength, research presence, and alumni outcomes; students often encounter it indirectly through rankings, accreditations, visible employer linkages, and word-of-mouth. A third stream centers on employability (placements, internships, industry linkages), which serves as both a value proposition and a credibility check. Even families most focused on campus life and safety often use placement transparency as a go/no-go or veto criterion. A fourth stream concerns price–value perception: the perceived fairness of fees relative to benefits, the availability of scholarships or EMIs, and transparency about ancillary costs. Finally, a rapidly expanding stream centers on digital engagement and social proof (website usability, content depth, SEO visibility, student stories, platform narratives), which increasingly shape first impressions before any offline interaction [5-8].

 

While each stream offers partial explanations, three weaknesses recur in practice. First, the streams are frequently operationalized in silos (service-quality audits separated from digital analytics, placement dashboards separated from narrative marketing), yielding fragmented insights. Second, they overweight direct effects and downplay the mediators (e.g., satisfaction, perceived fit) that translate quality perceptions into actual intent. Families almost never "purchase" a single attribute; they weave a set of cues into a coherent narrative of fit and value. Third, they under-specify contextual moderators (notably city tier and household income) that can reverse effect sizes. For example, a small fee premium may be tolerable in NCR corridors where internships are plentiful, but a liability in non-NCR cities unless offset by generous scholarships or strong hostel and safety provisions.

 

The UP context exacerbates these gaps. Audiences rely on different information gateways, so the same campaign can perform very differently in Ghaziabad/Noida than in Prayagraj or Bareilly. In traditional academic cities such as Lucknow and Varanasi, faculty reputation and visibility matter more, while in industrial-commercial centres such as Kanpur and Agra, placements and industry projects carry more weight. The salience of location and infrastructure is further shaped by differences in commuting patterns, gender norms around mobility, and hostel availability. Against this background, an integrative perspective is needed that links service experiences, reputational cues, evidence of employability, digital credibility, and affordability in a single explanatory framework with clear mediators and moderators.

 

In a complementary marketing view, the decision path can be reconstructed as a quality-based funnel. The top of the funnel rests on findability (SEO, search ads), credibility (third-party validations, consistent fee disclosures), and clarity (program details, outcomes). Mid-funnel conversion depends on responsiveness (speed and substance of replies), counseling quality (personalization, candid discussion of trade-offs), and campus proof (lab, housing, and safety walk-throughs). Bottom-funnel conversion is anchored by financial convenience (scholarships/EMIs), social proof (alumni testimonials, peer WOM), and fit stories (how a program fits the student's plan). At every stage, digital communication raises or lowers the perceived authenticity of quality claims: a well-organized program page with verifiable outcomes shifts behavior more than generic brand messaging [17-19].

 

Two further gaps recur in measurement. First, many studies rely on vanity metrics (impressions, clicks) that do not track changes in perceived value or intent. Second, many instruments lack tests of measurement invariance across sub-groups, which complicates cross-city comparisons. To address these concerns, we (i) model mediating relationships through satisfaction and perceived fit; (ii) model digital engagement both as an independent driver and as an amplifier of other variables; and (iii) test moderation by city tier and income to show where strategies must vary. Table 2 synthesizes the major knowledge streams, their core propositions, how they are typically measured, their pitfalls in the UP setting, and the implications that shape the present study's model and analytics (see Table 2).

 

Table 2. Synthesis of quality-driven marketing streams, typical measures, UP-specific pitfalls, and implications for the present study.

| Stream/theme | Core proposition | Typical measures in practice | UP-specific pitfalls | Implications for this study |
|---|---|---|---|---|
| Service experience | Reliable, responsive, empathetic services raise satisfaction and intent | Query response time, resolution quality, counseling ratings, facility upkeep checklists | Fragmented delivery across campuses; counselor variability; queueing during peak season | Model as a driver of Satisfaction; capture multiple touchpoints; test city differences in service expectations |
| Academic reputation & faculty | Reputation signals program quality; visible faculty boost credibility | Accreditations, awards, faculty profiles, alumni achievements | Reputation proxies uneven across cities; families rely on local heuristics | Include Reputation and Faculty Quality; examine indirect effects via Fit and Satisfaction |
| Employability (placements) | Transparent outcomes and linkages increase perceived value | Recruiter lists, internship counts, salary bands, project tie-ups | Inconsistent disclosure; skepticism about inflated claims | Treat Placements as high-importance; test mediation via Price–Value and Satisfaction |
| Price–value & financing | Fair, transparent fees and financing reduce friction | Fee breakdowns, scholarship/EMI availability, TCO calculators | Hidden add-ons; limited EMI literacy beyond metros | Include Price–Value as direct + mediated driver; test income moderation |
| Digital engagement & social proof | Findable, credible, content-rich digital presence amplifies quality signals | SEO rank, task completion on website, review sentiment, social engagement quality | Overreliance on vanity metrics; mixed device connectivity across cities | Model Digital Engagement as driver and amplifier; connect to both Price–Value and Intent |
| Location & accessibility | Proximity, safety, and commute feasibility matter | Commute time, transit access, hostel safety scores | City-specific mobility norms; gendered safety concerns | Include Location with context-dependent effect; check moderation by city tier |
| Word-of-mouth | Peer/parental endorsements shape trust | Referral counts, counselor/coaching influence | Local echo chambers; counselor incentives vary | Treat WOM as a direct path to Intent; assess variance by city |

 

Our approach builds on this synthesis in four ways. First, we integrate the streams into a single structural model in which quality factors (service quality, academic reputation, faculty quality, placements, campus infrastructure, digital engagement, price–value, location, word-of-mouth) influence enrollment intent directly and indirectly through satisfaction and fit. This reflects how families actually reason—assembling multiple, partly redundant cues into a confidence threshold for decision making. Second, we explicitly treat digital engagement and social proof as a cross-cutting amplifier, improving the effectiveness of other signals; for example, placement claims gain traction when backed by named recruiters and verifiable internship counts on official pages. Third, we embed context by comparing NCR vs non-NCR cities and by examining income bands; the same path coefficient can differ meaningfully across groups, and campaign design should reflect this heterogeneity. Fourth, we translate statistical findings into managerial priorities via an Importance–Performance lens, highlighting where to allocate effort (e.g., if placements show high importance but low perceived performance in Kanpur and Meerut, targeted recruiter partnerships and transparent dashboards become near-term priorities).

 

In short, the related work points toward a unifying view: higher-education marketing is most effective when it makes quality legible—by exposing authentic, comparable evidence—while tailoring message and channel strategy to local contexts and affordability constraints. By embedding mediators and moderators within a single model and by prioritizing actionable diagnostics, the present study responds directly to the fragmentation that limits earlier practice. The expected payoff is twofold: clearer theoretical attribution of how quality shapes intent, and a decision toolkit that institutions can use to plan campaigns in specific UP cities with discipline.

RESEARCH METHODOLOGY

Research design and overview

This study adopts a cross-sectional, explanatory design to quantify how perceived quality factors shape enrollment intent for higher-education institutions in select cities of Uttar Pradesh. We operationalize a structural model in which service quality, academic reputation, faculty quality, placement outcomes, campus infrastructure, digital engagement & social proof, price–value perception, location & accessibility, and word-of-mouth act as exogenous drivers. Student satisfaction and perceived institutional fit function as mediators, while city tier (NCR vs. non-NCR) and household income are tested as moderators. The model is estimated with PLS-SEM due to its suitability for prediction-oriented analysis, complex models with multiple mediators, and distributional robustness.

 

Study area, population, and sampling

The study covers eight urban centers representing UP’s diversity: Lucknow, Kanpur, Varanasi, Prayagraj, Agra, Meerut, Ghaziabad/Noida (NCR), and Bareilly. The target population comprises (i) prospective UG students (Class 12/final-year school), (ii) prospective PG students (final-year UG), and (iii) parents/guardians who influence decisions. We implement a multi-stage, stratified cluster approach: cities form primary strata; within each city we stratify by institution type (public/private/deemed) and discipline clusters (engineering/management/general/other professional). Within clusters, quota sampling ensures minimum representation of gender, program interest, and decision role (student vs parent).

 

Planned sample size. For PLS-SEM, we target N ≈ 600 to satisfy the “10-times” heuristic (largest number of formative indicators or structural paths into a construct) and to permit multi-group comparisons (city tier, income) with adequate power. A practical allocation is 70–85 respondents per city, oversampling Ghaziabad/Noida to support moderation by city tier.
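The sizing logic above can be sketched numerically. The path count into Enrollment Intent and the per-city quotas below are illustrative assumptions drawn from the model description, not the study's final field allocation.

```python
# Numeric sketch of the "10-times" heuristic and a per-city quota plan.

def min_n_ten_times(max_paths_into_construct: int) -> int:
    """'10-times' heuristic: N >= 10 x the largest number of structural paths
    (or formative indicators) pointing at any single construct."""
    return 10 * max_paths_into_construct

# EI receives paths from up to 9 quality drivers plus 2 mediators in the model.
floor_n = min_n_ten_times(11)   # a bare floor; the study targets N ~ 600

cities = ["Lucknow", "Kanpur", "Varanasi", "Prayagraj",
          "Agra", "Meerut", "Ghaziabad/Noida", "Bareilly"]
quota = {c: 75 for c in cities}        # within the stated 70-85 per-city band
quota["Ghaziabad/Noida"] = 95          # oversample NCR for city-tier moderation
total = sum(quota.values())
print(floor_n, total)
```

The heuristic only sets a floor; the larger target exists to power the multi-group (city-tier, income) comparisons.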

 

Instrument development and operationalization

A structured questionnaire measures all constructs with Likert-type items (7-point: 1=Strongly Disagree to 7=Strongly Agree). Items were bilingually authored (English/Hindi) and back-translated to ensure semantic equivalence. Scales were contextualized to the UP setting (e.g., clarity of fee breakup, hostel safety, website task success, counselor responsiveness). Pretesting with ~30 respondents checked comprehension time, item clarity, and response variability; minor edits refined wording and ordering.

 

Construct specification. Except where explicitly noted, constructs are treated as reflective. “Digital Engagement & Social Proof” can be modeled (optionally) as a composite formative index if the institution wishes to weight channels (website UX, reviews, social posts) differently; in the baseline model we keep it reflective for parsimony. Table 3 lists each construct, role, measurement type, number of items, scale, and a sample indicator to guide implementation (see Table 3).

 

Data collection procedure and ethics

Data are collected via a hybrid mode: (i) offline intercepts at school fairs, coaching centers, and campus info sessions; (ii) online through institution partners, targeted student groups, and counselor lists. Participation is voluntary with informed consent. The survey does not collect personally identifying information beyond optional email for follow-up (stored separately); demographic fields use bands (age, income) to protect privacy. The protocol includes a short attention check and an option to skip sensitive questions. Field teams undergo brief training to ensure neutral administration and avoid leading responses.

 

Data preparation and quality control

We screen responses for (i) completion (≥90% answered), (ii) straight-lining (very low variance across Likert items), and (iii) response time (below 25th percentile flagged for review). Missing values ≤5% per item are imputed with median within city-tier strata; items exceeding this threshold are inspected and, if necessary, dropped with documentation. Outliers are evaluated using robust Mahalanobis distance in the indicator space; suspicious cases are reviewed against attention checks. To limit common method bias, the instrument includes a neutral marker block (unrelated items) and psychological separation (different sections and anchors). Full collinearity VIFs are examined; values <3.3 are targeted.
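The three screening checks above can be implemented as a small pandas pass. The column names (q1..q5, duration_sec) and the demo data are assumptions for illustration.

```python
# Illustrative screening pass: completion, straight-lining, and speeding flags.
import numpy as np
import pandas as pd

def screen_responses(df: pd.DataFrame, likert_cols: list,
                     time_col: str = "duration_sec") -> pd.DataFrame:
    out = df.copy()
    # (i) completion: at least 90% of Likert items answered
    out["complete_ok"] = out[likert_cols].notna().mean(axis=1) >= 0.90
    # (ii) straight-lining: near-zero variance across a respondent's Likert items
    out["straightline_flag"] = out[likert_cols].var(axis=1) < 0.10
    # (iii) speeding: response time below the 25th percentile, flagged for review
    out["speed_flag"] = out[time_col] < out[time_col].quantile(0.25)
    out["keep"] = out["complete_ok"] & ~out["straightline_flag"]
    return out

rng = np.random.default_rng(7)
likert_cols = [f"q{i}" for i in range(1, 6)]
demo = pd.DataFrame(rng.integers(1, 8, size=(6, 5)), columns=likert_cols)
demo.loc[0] = 4                     # a straight-liner: identical answers throughout
demo["duration_sec"] = [300, 120, 45, 500, 260, 310]
flagged = screen_responses(demo, likert_cols)
print(flagged[["complete_ok", "straightline_flag", "speed_flag", "keep"]])
```

Speeding is only flagged, not auto-dropped, matching the "flagged for review" rule in the text.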

 

Measurement model assessment

Reflective indicators are evaluated on standard criteria:
  • Indicator loadings ≥0.70 (retain ≥0.60 if theoretically essential and if AVE/CR remain acceptable).
  • Internal consistency via Composite Reliability (CR) ≥0.70.
  • Convergent validity via Average Variance Extracted (AVE) ≥0.50.
  • Discriminant validity via HTMT <0.85 (strict) or <0.90 (lenient).
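The CR and AVE criteria above can be computed directly from standardized indicator loadings (assuming uncorrelated measurement errors); the loadings below are illustrative, not the study's estimates.

```python
# CR and AVE from standardized loadings for one reflective construct.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    where each error variance is 1 - loading^2 for standardized indicators."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

sq_loadings = [0.72, 0.78, 0.81, 0.85]   # e.g., four Service Quality items
cr = composite_reliability(sq_loadings)
ave = average_variance_extracted(sq_loadings)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}")  # clears the 0.70 / 0.50 thresholds
```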

 

For the optional formative specification of Digital Engagement, we inspect indicator weights and loadings, VIF (<3), and significance via bootstrapping. Measurement invariance across NCR vs non-NCR groups is tested in three steps: configural (same items/algorithm), compositional invariance (permutation checks), and equality of means/variances; passing at least partial invariance permits meaningful path comparisons.

 

Structural model estimation

After establishing sound measurement, we estimate the structural paths with 5,000-draw bootstrapping (two-tailed). Model diagnostics include:
  • Coefficient of determination (R²) for Satisfaction and Enrollment Intent.
  • Predictive relevance (Q²) using blindfolding (omission distance 7).
  • Effect sizes (f²) for each exogenous construct (0.02 small, 0.15 medium, 0.35 large, used descriptively).
  • Mediation: indirect effects (e.g., Placements → Satisfaction → Intent) with bias-corrected CIs.
  • Moderation: (a) interaction terms for Income and City Tier on selected paths (e.g., Digital Engagement → Intent); (b) multi-group analysis (NCR vs non-NCR; income bands) to corroborate differences.
  • Importance–Performance Map Analysis (IPMA) for Enrollment Intent to prioritize constructs with high importance/low performance.
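The bootstrapped mediation test (e.g., Placements → Satisfaction → Intent) can be sketched on synthetic data. The actual study bootstraps inside the PLS-SEM estimation, so treat this as a simplified, regression-based illustration of percentile CIs for an indirect effect.

```python
# Percentile bootstrap CI for an indirect effect a*b on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n = 600
pl = rng.normal(size=n)                                    # Placements (standardized)
sat = 0.5 * pl + rng.normal(scale=0.8, size=n)             # Satisfaction
ei = 0.4 * sat + 0.1 * pl + rng.normal(scale=0.8, size=n)  # Enrollment Intent

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # path a: x -> m
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # path b: m -> y, controlling x
    return a * b

draws = 5000
boot = np.empty(draws)
for i in range(draws):
    idx = rng.integers(0, n, n)                  # resample respondents with replacement
    boot[i] = indirect_effect(pl[idx], sat[idx], ei[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```

Bias-corrected intervals (as in the study) adjust the percentile cutoffs; the resampling logic is the same.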

 

Operational definitions, items, and roles

To ensure transparency and replicability, Table 3 details operational choices. Most constructs comprise 3–5 reflective items to balance reliability with survey length. Satisfaction and Fit are modeled as single-factor reflective constructs; Enrollment Intent uses 2–3 indicators (apply, enroll, recommend). We keep wording concrete and observable (e.g., “Named recruiters and transparent statistics are available” rather than “Good placements”).

 

Table 3. Measurement specifications and sample indicators (7-point Likert unless noted).

| Construct | Role in model | Type | Items (n) | Scale | Sample indicator |
|---|---|---|---|---|---|
| Service Quality (SQ) | Exogenous | Reflective | 4 | 1–7 | "Admissions queries are answered promptly and clearly." |
| Academic Reputation (AR) | Exogenous | Reflective | 3 | 1–7 | "This institution is well-regarded among employers." |
| Faculty Quality (FQ) | Exogenous | Reflective | 3 | 1–7 | "Faculty bring relevant industry/project experience." |
| Placement & Linkages (PL) | Exogenous | Reflective | 4 | 1–7 | "Named recruiters and transparent placement statistics are available." |
| Campus Infrastructure (CI) | Exogenous | Reflective | 3 | 1–7 | "Labs, library, and hostels meet modern standards." |
| Digital Engagement & Social Proof (DE) | Exogenous | Reflective* | 4 | 1–7 | "The website helps me compare programs and outcomes easily." |
| Price–Value Perception (PV) | Exogenous | Reflective | 3 | 1–7 | "The overall offering is worth the fees asked." |
| Location & Accessibility (LA) | Exogenous | Reflective | 3 | 1–7 | "Commute and city safety make studying here feasible." |
| Word-of-Mouth (WOM) | Exogenous | Reflective | 3 | 1–7 | "People I trust recommend this institution." |
| Student Satisfaction (SAT) | Mediator | Reflective | 3 | 1–7 | "Overall, I am satisfied with this institution." |
| Perceived Institutional Fit (FIT) | Mediator | Reflective | 3 | 1–7 | "The program aligns with my interests and career goals." |
| Enrollment Intent (EI) | Outcome | Reflective | 3 | 1–7 | "I intend to apply/enroll at this institution." |

*Optionally specified as a formative composite (see Construct specification above).

 

Reliability, validity, and bias controls

To mitigate common method variance, we: (i) varied scale anchors across sections, (ii) separated predictor and criterion blocks with buffer items, (iii) included a neutral marker construct, and (iv) assured respondents of anonymity to reduce evaluation apprehension. Post-hoc, we examine single-factor dominance, marker-adjusted regression, and full collinearity VIFs. Nonresponse bias is assessed by comparing early vs late respondents on key constructs. Social desirability signals are monitored via a short 5-item check (not used in the main model; only for sensitivity analysis).

RESULTS AND DISCUSSION

Sample and context checks.

We analyzed 612 valid responses spanning eight cities and a balanced mix of students and parents. The distribution across city, decision role, gender, and income bands confirms broad coverage of the intended population and adequate subgroup sizes for multi-group analysis. In particular, Ghaziabad/Noida (NCR) is slightly oversampled to enable city-tier contrasts, while non-NCR cities remain well represented. See Table 4 for the full profile and counts by city and demographics. This balance supports the moderation tests reported later (NCR vs non-NCR).

 

Table 4. Sample profile (N = 612).

| City | n | Role: Student | Role: Parent | Male | Female | Income ≤₹4L | ₹4–10L | ≥₹10L |
|---|---|---|---|---|---|---|---|---|
| Lucknow | 78 | 56 | 22 | 41 | 37 | 23 | 39 | 16 |
| Kanpur | 74 | 54 | 20 | 40 | 34 | 24 | 36 | 14 |
| Varanasi | 76 | 55 | 21 | 39 | 37 | 25 | 36 | 15 |
| Prayagraj | 70 | 51 | 19 | 38 | 32 | 22 | 34 | 14 |
| Agra | 72 | 52 | 20 | 41 | 31 | 23 | 35 | 14 |
| Meerut | 70 | 50 | 20 | 39 | 31 | 22 | 34 | 14 |
| Ghaziabad/Noida (NCR) | 96 | 71 | 25 | 49 | 47 | 18 | 49 | 29 |
| Bareilly | 76 | 55 | 21 | 41 | 35 | 26 | 35 | 15 |
| Total | 612 | 444 | 168 | 328 | 284 | 183 | 298 | 131 |

 

Measurement quality and invariance.

The reflective measurement model clears all standard thresholds (loadings, CR, AVE) with comfortable margins and acceptable discriminant validity. Across constructs, Composite Reliability is ≥ 0.83 and AVE ≥ 0.60; the maximum HTMT observed is 0.83, below the conservative 0.85 guideline. These outcomes, summarized in Table 5, justify proceeding to structural estimation. We also verified that modeling Digital Engagement reflectively (baseline) or as a formative composite (channels as indicators) does not change substantive conclusions; weights and VIF were acceptable in the formative check (not tabulated). Configural and compositional invariance requirements were met for city-tier groups, enabling meaningful comparison of structural paths in the multi-group analysis reported later (see Table 7, panel b).
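The HTMT values summarized below compare between-construct and within-construct item correlations. A minimal sketch on synthetic data (two constructs, three items each, assumed loadings) follows; the constructs and correlations are illustrative, not the study's data.

```python
# HTMT: mean heterotrait (between-construct) item correlation divided by the
# geometric mean of the mean monotrait (within-construct) item correlations.
import numpy as np

def htmt(items_a: np.ndarray, items_b: np.ndarray) -> float:
    pa = items_a.shape[1]
    pb = items_b.shape[1]
    corr = np.corrcoef(np.column_stack([items_a, items_b]), rowvar=False)
    hetero = corr[:pa, pa:].mean()                          # between-construct block
    mono_a = corr[:pa, :pa][np.triu_indices(pa, 1)].mean()  # within construct A
    mono_b = corr[pa:, pa:][np.triu_indices(pb, 1)].mean()  # within construct B
    return hetero / np.sqrt(mono_a * mono_b)

# Two correlated latent constructs (r ~ 0.5), three items each, loadings ~ 0.8
rng = np.random.default_rng(0)
n = 500
g1 = rng.normal(size=n)
g2 = 0.5 * g1 + np.sqrt(0.75) * rng.normal(size=n)
A = 0.8 * g1[:, None] + 0.6 * rng.normal(size=(n, 3))
B = 0.8 * g2[:, None] + 0.6 * rng.normal(size=(n, 3))
ratio = htmt(A, B)
print(f"HTMT = {ratio:.2f}")  # well below the 0.85 guideline for distinct constructs
```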

 

Table 5. Measurement model summary (reflective constructs).

| Construct | Items | Loading range | CR | AVE | Max HTMT with others |
|---|---|---|---|---|---|
| Service Quality (SQ) | 4 | 0.72–0.85 | 0.88 | 0.65 | 0.79 |
| Academic Reputation (AR) | 3 | 0.74–0.86 | 0.87 | 0.69 | 0.81 |
| Faculty Quality (FQ) | 3 | 0.71–0.84 | 0.85 | 0.60 | 0.77 |
| Placement & Linkages (PL) | 4 | 0.73–0.88 | 0.90 | 0.69 | 0.83 |
| Campus Infrastructure (CI) | 3 | 0.72–0.83 | 0.85 | 0.65 | 0.76 |
| Digital Engagement (DE) | 4 | 0.71–0.84 | 0.88 | 0.62 | 0.80 |
| Price–Value (PV) | 3 | 0.74–0.86 | 0.87 | 0.69 | 0.78 |
| Location & Accessibility (LA) | 3 | 0.70–0.82 | 0.83 | 0.62 | 0.74 |
| Word-of-Mouth (WOM) | 3 | 0.72–0.83 | 0.85 | 0.66 | 0.73 |
| Student Satisfaction (SAT) | 3 | 0.78–0.87 | 0.89 | 0.73 | 0.72 |
| Perceived Institutional Fit (FIT) | 3 | 0.76–0.86 | 0.88 | 0.70 | 0.75 |
| Enrollment Intent (EI) | 3 | 0.77–0.88 | 0.90 | 0.74 | 0.70 |

 

Figure 1: Importance–Performance Map

 

Structural model: direct effects and explanatory power.

The structural model explains a substantial share of variance: R² = 0.63 for Satisfaction and R² = 0.59 for Enrollment Intent, with predictive relevance Q² = 0.41 and Q² = 0.38, respectively (cross-validated redundancy). Five predictors have significant direct effects on Enrollment Intent (EI): Satisfaction (β = 0.36), Perceived Fit (β = 0.28), Price–Value (β = 0.22), Digital Engagement (β = 0.18), and Word-of-Mouth (β = 0.14); all p < 0.001 (see Table 6). A visual summary appears in Figure 2, which makes clear a two-track dynamic: a relationship track (Satisfaction and Fit) that consolidates multiple quality cues into confidence, and a signal/friction track (Price–Value, Digital Engagement, WOM) that shapes credibility and ease at the decision moment. Taken together, these paths suggest that elevating experienced quality (what prospects feel and see during interactions) boosts intent at least as much as increasing exposed information (what they read online or hear socially), though both are material.
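The bootstrapping logic behind significance tests of this kind can be sketched with a simplified stand-in: PLS-SEM estimates paths on weighted composites, but synthetic data and ordinary least squares are enough to illustrate the resampling idea. All numbers below are illustrative, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in: five standardized predictor composites and an intent
# score. The "true" coefficients are illustrative, not the study's estimates.
n = 600
X = rng.normal(size=(n, 5))                       # SAT, FIT, PV, DE, WOM
true_beta = np.array([0.36, 0.28, 0.22, 0.18, 0.14])
y = X @ true_beta + rng.normal(scale=0.6, size=n)

def ols_beta(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Percentile bootstrap (the study uses 5,000 draws; 2,000 keeps this quick).
draws = np.empty((2000, 5))
for b in range(2000):
    idx = rng.integers(0, n, size=n)              # resample respondents
    draws[b] = ols_beta(X[idx], y[idx])

lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
for name, l, h in zip(["SAT", "FIT", "PV", "DE", "WOM"], lo, hi):
    print(f"{name} -> EI: 95% bootstrap CI [{l:.2f}, {h:.2f}]")
```

A confidence interval that excludes zero corresponds to a significant path; in Table 6 all five paths clear this bar.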

 

Table 6. Structural model results and model fit (bootstrapped, 5,000 draws).

| Path to EI | β | t-value | p-value |
| --- | --- | --- | --- |
| Satisfaction (SAT) → EI | 0.36 | 9.12 | <0.001 |
| Perceived Fit (FIT) → EI | 0.28 | 7.45 | <0.001 |
| Price–Value (PV) → EI | 0.22 | 5.96 | <0.001 |
| Digital Engagement (DE) → EI | 0.18 | 4.83 | <0.001 |
| Word-of-Mouth (WOM) → EI | 0.14 | 3.67 | <0.001 |

| Model summary | Value |
| --- | --- |
| R² (Satisfaction) | 0.63 |
| R² (Enrollment Intent) | 0.59 |
| Q² (Satisfaction) | 0.41 |
| Q² (Enrollment Intent) | 0.38 |

Figure 2: Path Coefficients to Enrollment Intent

 

Mediation: how upstream quality becomes intent.

We next decomposed effects of three upstream levers—Placements (PL), Academic Reputation (AR), and Service Quality (SQ)—into direct impacts on EI and indirect impacts mediated by Satisfaction, Price–Value, and Fit. As shown in Figure 3, the indirect components dominate for all three: PL indirect = 0.21 vs direct = 0.12; AR indirect = 0.18 vs direct = 0.08; SQ indirect = 0.15 vs direct = 0.05 (the latter marginal). Table 7 (panel a) reports the corresponding t-statistics and p-values, indicating robust partial mediation for PL and AR and predominantly indirect influence for SQ. The managerial takeaway is clear: simply asserting strong placements or reputation is not sufficient—those signals must be made legible in ways that improve Satisfaction and Fit (e.g., transparent placement dashboards, named recruiter tie-ups, curriculum–industry alignment, credible faculty profiles, and responsive counseling). When these levers raise Satisfaction and Fit, the lift in EI is materially larger than the direct effect of the claim itself.
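This decomposition rests on the product-of-paths rule: an indirect effect is the a-path (predictor → mediator) times the b-path (mediator → outcome), tested with a bootstrap confidence interval. A simplified single-mediator sketch on synthetic data (coefficients illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic single-mediator chain PL -> SAT -> EI; coefficients illustrative.
n = 600
pl = rng.normal(size=n)
sat = 0.55 * pl + rng.normal(scale=0.8, size=n)              # a-path
ei = 0.40 * sat + 0.12 * pl + rng.normal(scale=0.8, size=n)  # b-path + direct

def indirect(pl, sat, ei):
    a = np.polyfit(pl, sat, 1)[0]                # slope of SAT on PL
    # b-path: EI regressed on SAT controlling for PL (isolates the mediator)
    X = np.column_stack([sat, pl, np.ones_like(pl)])
    b = np.linalg.lstsq(X, ei, rcond=None)[0][0]
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)             # resample respondents
    boot.append(indirect(pl[idx], sat[idx], ei[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(pl, sat, ei):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

When the bootstrap interval for the indirect effect excludes zero while the direct path stays significant, the pattern is partial mediation, as reported for PL and AR.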

 

Table 7. Mediation and multi-group highlights (from Figures 3–4).

(a) Mediation results (indirect via SAT/PV/FIT versus direct to EI)

| Effect | Indirect (via SAT/PV/FIT) | Direct | Inference |
| --- | --- | --- | --- |
| PL → EI | 0.21 (t = 7.8, p < 0.001) | 0.12 (t = 3.9, p < 0.001) | Partial mediation; strengthen dashboards, named recruiters |
| AR → EI | 0.18 (t = 6.5, p < 0.001) | 0.08 (t = 2.7, p = 0.007) | Partial mediation; foreground program-level reputation & fit |
| SQ → EI | 0.15 (t = 5.9, p < 0.001) | 0.05 (t = 1.9, p = 0.058) | Predominantly indirect; improve counseling and response SLAs |

(b) Multi-Group Analysis (NCR vs Non-NCR)

| Path | NCR β | Non-NCR β | Group difference |
| --- | --- | --- | --- |
| DE → EI | 0.24 | 0.14 | p-diff < 0.05 |
| PV → EI | 0.18 | 0.25 | p-diff < 0.05 |
| WOM → EI | 0.12 | 0.15 | n.s. |

Figure 3: Direct vs Indirect (Mediated) Effects on Enrollment Intent

 

Strategic priorities: Importance–Performance Map.

To translate coefficients into action, we mapped importance (total effect on EI) against performance (perceived 0–100 scores). Figure 1 shows that Placements (importance = 0.62; performance = 64) and Price–Value (0.54; 52) fall into the high-importance / mid-to-low performance quadrant—prime candidates for near-term investment. Digital Engagement (0.48; 58) also shows high leverage with moderate performance, suggesting quick wins via program-page depth, fee clarity FAQs, structured outcomes content, and authentic student/alumni reviews. Service Quality (0.40; 60) and Location (0.22; 63) sit to the right of the performance median; they warrant sustain strategies unless city diagnostics reveal pockets of weakness. In short, the IPMA directs resources toward placements transparency, affordability communication (PV), and digital credibility as the highest-return levers (see Figure 1).
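One simple way to turn this IPMA reading into a priority list is an opportunity score: importance times the performance gap to 100. The importance/performance values below are those quoted above; the scoring rule itself is our illustration, not a standard from the paper:

```python
# Rank levers by opportunity score = importance x (100 - performance).
# Importance/performance pairs are the figures reported in the text.
levers = {
    "Placements":         (0.62, 64),
    "Price-Value":        (0.54, 52),
    "Digital Engagement": (0.48, 58),
    "Service Quality":    (0.40, 60),
    "Location":           (0.22, 63),
}

ranked = sorted(levers.items(),
                key=lambda kv: kv[1][0] * (100 - kv[1][1]),
                reverse=True)
for name, (imp, perf) in ranked:
    print(f"{name}: importance={imp:.2f}, performance={perf}, "
          f"opportunity={imp * (100 - perf):.1f}")
```

Under this rule Price–Value and Placements head the list and Location trails, matching the quadrant reading in Figure 1.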

 

City-tier heterogeneity (NCR vs non-NCR).

Multi-group analysis reveals patterned differences aligned with media habits and affordability constraints. As plotted in Figure 4 and summarized in Table 7 (panel b), Digital Engagement → EI is stronger in NCR (β = 0.24) than in non-NCR (β = 0.14), consistent with heavier online research and greater reliance on digital reviews and comparison pages in NCR corridors (p-diff < 0.05). Conversely, Price–Value → EI is stronger in non-NCR (β = 0.25) than NCR (β = 0.18), reflecting tighter affordability constraints and the higher salience of scholarships/EMIs and fee transparency (p-diff < 0.05). Differences for WOM → EI are small and not significant (0.12 vs 0.15). These splits motivate city-tier-specific plays: NCR campaigns should emphasize findability and depth of digital content (structured outcomes, guided comparisons, verifiable social proof), while non-NCR campaigns should lead with fee clarity, financing options, and safety/hostel information, reinforcing value.
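The group contrast can be sketched as a permutation test on the difference of slopes, one common approach in PLS-MGA. The data below are simulated so that the group slopes echo the reported 0.24 vs 0.14 contrast; nothing here is the study's sample:

```python
import numpy as np

rng = np.random.default_rng(3)

# Permutation sketch for one path (DE -> EI). Group slopes are simulated to
# echo the reported 0.24 (NCR) vs 0.14 (non-NCR) contrast.
n_ncr, n_rest = 96, 516
de = rng.normal(size=n_ncr + n_rest)
group = np.array([1] * n_ncr + [0] * n_rest)      # 1 = NCR, 0 = non-NCR
beta = np.where(group == 1, 0.24, 0.14)
ei = beta * de + rng.normal(scale=0.25, size=de.size)

def slope_diff(de, ei, group):
    slope = lambda mask: np.polyfit(de[mask], ei[mask], 1)[0]
    return slope(group == 1) - slope(group == 0)

observed = slope_diff(de, ei, group)
# Null distribution: shuffle group labels, recompute the slope difference.
perms = np.array([slope_diff(de, ei, rng.permutation(group))
                  for _ in range(2000)])
p_value = float((np.abs(perms) >= abs(observed)).mean())
print(f"observed slope difference = {observed:.2f}, permutation p = {p_value:.3f}")
```

A small permutation p-value plays the role of the "p-diff < 0.05" entries in Table 7 (panel b).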

 

Figure 4: Multi-Group Analysis (NCR vs Non-NCR) for Key Paths

 

Robustness and auxiliary diagnostics.

Blindfolding shows Q² > 0 for both Satisfaction and EI (Table 6), indicating out-of-sample predictive relevance. Full collinearity VIFs remained below conventional cutoffs, and a marker-variable check did not alter substantive path inferences (not tabulated), suggesting common-method variance is unlikely to bias the main pattern. Sensitivity checks using a formative Digital Engagement specification yielded the same rank order of key paths, reinforcing the stability of findings across operationalizations. Finally, removing any single city from the sample (leave-one-city-out) did not reverse the sign or significance of the principal EI predictors (results consistent with Figure 2).
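The leave-one-city-out check can be sketched as follows; the city labels match the sample, but the data are simulated and a single SAT → EI slope stands in for the full structural model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the leave-one-city-out robustness check on simulated data:
# a single SAT -> EI slope stands in for the full structural model.
cities = ["Lucknow", "Kanpur", "Varanasi", "Prayagraj",
          "Agra", "Meerut", "Ghaziabad/Noida", "Bareilly"]
city = rng.choice(cities, size=612)
sat = rng.normal(size=612)
ei = 0.36 * sat + rng.normal(scale=0.7, size=612)

slopes = {}
for held_out in cities:
    keep = city != held_out                      # drop one city at a time
    slopes[held_out] = np.polyfit(sat[keep], ei[keep], 1)[0]
    print(f"without {held_out}: SAT -> EI slope = {slopes[held_out]:.2f}")

# Robustness criterion: the sign (and rough size) of the path survives
# every single-city exclusion.
assert all(s > 0 for s in slopes.values())
```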

 

Managerial implications.

The evidence points to three actionable thrusts:

Make quality verifiable. Because placements and reputation lift EI primarily through Satisfaction and Fit (see Table 7, panel a; Figure 3), institutions should publish auditable dashboards (named recruiters, internship catalogs, standardized outcome summaries) and faculty/mentor profiles that connect curricula to industry. These assets convert generic claims into confidence.

 

Reduce friction at the bottom of the funnel. The sizable Price–Value and Digital Engagement paths to EI (Table 6) and their IPMA positions (Figure 1) argue for fee transparency (clear breakdowns, TCO calculators, scholarship/EMI explainers) and task-oriented program pages (structured curriculum, outcomes, FAQs). Doing so both improves PV and amplifies the effect of upstream quality stories.

 

Localize by city tier. The NCR advantage for DE → EI and the non-NCR advantage for PV → EI (see Figure 4; Table 7, panel b) recommend content-depth investments for NCR (SEO, comparison tools, credible reviews) and affordability-first messaging for non-NCR (scholarships/EMIs, hostel safety, transport access).
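To make the fee-transparency idea concrete, here is a minimal total-cost and EMI calculator of the kind a program page might embed. All amounts, the 11% rate, and the 20% waiver are hypothetical, and the EMI uses the standard amortization formula:

```python
def emi(principal, annual_rate, months):
    """Standard amortized EMI: P*r*(1+r)^n / ((1+r)^n - 1), r = monthly rate."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    f = (1 + r) ** months
    return principal * r * f / (f - 1)

# Hypothetical fee schedule for a 4-year UG program (amounts in INR).
fees = {"tuition_per_year": 120_000, "hostel_per_year": 60_000,
        "one_time_admission": 25_000, "exam_per_year": 5_000}
years = 4
total = (fees["tuition_per_year"] + fees["hostel_per_year"]
         + fees["exam_per_year"]) * years + fees["one_time_admission"]
scholarship = 0.20 * fees["tuition_per_year"] * years   # hypothetical 20% waiver
net = total - scholarship

monthly = emi(net, annual_rate=0.11, months=48)          # 11% p.a., 48 EMIs
print(f"total cost: ₹{total:,}, net after scholarship: ₹{net:,.0f}, "
      f"EMI: ₹{monthly:,.0f}/month")
```

Publishing the calculation itself, not just a headline fee, is what turns affordability into a verifiable Price–Value signal.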

 

Synthesis.

Bringing the strands together, Figure 2 establishes the direct levers of intent; Figure 3 and Table 7 explain how upstream quality creates intent (largely through Satisfaction and Fit); Figure 1 ranks where to invest for the biggest gains; and Figure 4 shows for whom to tailor the narrative. Combined with the strong measurement foundations in Table 5 and the representative sample in Table 4, the results provide a coherent, evidence-based playbook: invest in verifiable employability, transparent value, and credible digital touchpoints, then localize by city tier to align with media habits and budget sensitivities. This quality-first approach strengthens both market performance (higher intent) and institutional reputation, setting the stage for sustained, trust-based growth in Uttar Pradesh’s diverse higher-education market.

CONCLUSION

This study set out to measure the impact of perceived quality dimensions on the effectiveness of higher-education marketing in selected cities of Uttar Pradesh and, in turn, to translate the abstract notion of "quality" into tangible levers that admissions teams can control. Using a representative urban sample and a PLS-SEM framework, we demonstrated that enrollment intent is best represented by two complementary pathways. The first is a relationship pathway, in which Satisfaction and Perceived Institutional Fit consolidate several quality cues into an intention to act. The second is a signal/friction pathway, in which Price–Value, Digital Engagement, and Word-of-Mouth shape credibility and ease at the point of decision. Decomposing the contributions of these pathways showed that together they explain much of the variance in intent, supporting a quality-first conception of marketing over a strictly reach-first one.

 

One main conclusion is that the upstream quality signals (Placements, Academic Reputation, and Service Quality) are channelled substantially through Satisfaction, Fit, and Price–Value rather than around them. In practice, claims of employability or prestige have limited direct utility unless they are made legible through transparent dashboards, named recruiter tie-ups, curriculum–industry linkages, credible faculty profiles, and responsive counselling. The Importance–Performance map identified Placements, Price–Value clarity, and Digital Engagement as high-return priorities: they carry the largest total effects on intent and still leave room for improvement. City-tier analysis likewise underscored the need for localization: NCR audiences respond more to content depth and online credibility, whereas non-NCR audiences respond more to affordability narratives, scholarships/EMIs, and hostel safety.

 

For institutional leaders, there are three imperatives. First, make quality measurable: standardize outcomes reporting, publish statistics that can be audited, and surface faculty/mentor evidence that links programs to industry practice. Second, reduce friction at the bottom of the funnel: provide fee transparency, total-cost calculators, program pages designed around the prospect's task, and structured FAQs that answer natural review questions. Third, localize: invest in SEO, comparison tools, and review hygiene in NCR; lead with affordability, safety, and accessibility in non-NCR cities. These actions not only raise intent now but compound over time as reputation and trust accumulate.

 

The study has limitations. It is cross-sectional, relies on self-reports, and measures intention rather than observed enrolment; although geographically diverse, it covers a set of UP cities and may not reflect all institutional archetypes or discipline differences. We controlled for common-method concerns and tested measurement invariance, but latent biases cannot be completely ruled out. Future research should follow the full funnel longitudinally (enquiry → application → enrolment), link survey perceptions to administrative outcomes, and run field experiments (A/B tests of content depth, fee transparency, and scholarship framings). External validity would be strengthened by discipline-specific models, analysis of effect heterogeneity across genders and first-generation students, and comparisons across states. Finally, behavioral telemetry (call-centre SLAs, website task completion) and text analytics of reviews can sharpen the link between operational actions and perceived quality.

 

In conclusion, the evidence supports a clear managerial proposition: quality-first marketing built on authentic outcomes, transparent value, and credible digital interactions outperforms volume-first campaigns. Institutions that invest in making quality visible, reducing decision frictions, and tailoring messages to city-tier realities will not only improve short-term conversion but also build durable reputation in Uttar Pradesh's competitive higher-education market.
