This study examines how online review valence and average star ratings relate to consumer trust and, in turn, to purchase intention in e-commerce within Kerala. Using a cross-sectional survey of adult online shoppers who had purchased in the previous six months, a structured questionnaire captured perceptions of review valence/helpfulness, average star ratings encountered, consumer trust, and purchase intention on five-point Likert scales. Data screening removed incomplete and patterned responses; negatively keyed items were reverse-coded, and construct scores were computed as means. Analyses were conducted in EDUSTAT, reporting reliability, correlations, regression, and mediation. Results indicate that more positive review valence and higher average star ratings are each strongly associated with greater consumer trust. Trust, in turn, shows a large positive relationship with purchase intention. Mediation tests reveal significant indirect effects of both review valence and average star ratings on purchase intention via trust, alongside smaller direct effects, consistent with partial mediation. The findings highlight trust as the central mechanism linking review and rating signals to intention and suggest practical avenues for platforms and sellers: elevate review quality and recency, present ratings with credibility cues, and strengthen visible protections around returns, security, and customer support.
Consumer judgements in e-commerce increasingly hinge on user-generated signals—especially the sentiment of textual reviews and the levels summarised by average star ratings—which reduce information asymmetry and help shoppers manage risk in credence-laden categories (Chevalier & Mayzlin, 2006; Dellarocas, 2003; Duan et al., 2008; Floyd et al., 2014; Hennig-Thurau et al., 2004). A large empirical literature shows that such electronic word of mouth (eWOM) not only correlates with sales outcomes but also shapes pre-purchase perceptions that guide evaluation and choice (Chen & Xie, 2008; Mudambi & Schuff, 2010). Within Kerala, where online retail adoption is widespread, the specific pathways by which review valence and average star ratings translate into purchase intention through trust remain under-documented in concise, journal-length studies.
Trust is central to online buying because transactions occur without physical inspection or face-to-face assurance. Established models integrate trust with technology-acceptance and risk beliefs to explain intention formation in online contexts (Gefen et al., 2003; McKnight et al., 2002; Pavlou, 2003). In parallel, attitude–intention frameworks posit that behavioural intention is shaped by evaluative beliefs and normative considerations, offering a coherent basis to examine how trust mediates the influence of informational cues on intention (Ajzen, 1991). Taken together, these strands suggest a trust-centred mechanism: favourable review valence and higher average star ratings should elevate trust, which in turn should raise purchase intention.
Background
Dual-process persuasion theory clarifies why reviews and ratings operate as complementary cues. Detailed, diagnostic reviews invite central processing when consumers seek product-specific evidence, whereas average star ratings provide a fast, peripheral heuristic that can anchor impressions and narrow options (Petty & Cacioppo, 1986; Mudambi & Schuff, 2010). At the same time, the rating environment is imperfect—distributions can be J-shaped and sensitive to early entries—so consumers often integrate multiple signals (valence, volume, recency, visuals) to form robust expectations (Hu et al., 2009). Against this backdrop, the present study focuses on two widely visible signals—review valence/helpfulness and average star ratings—and tests whether their effects on purchase intention are transmitted primarily through consumer trust. Methodologically, mediation is assessed using established indirect-effect procedures suitable for compact survey designs (Preacher & Hayes, 2008; Sobel, 1982). The contribution is twofold: it isolates trust as the keystone mechanism linking review and rating signals to intention, and it provides Kerala-specific evidence using concise measures that are practical for journal publication.
Research Questions
RQ1. How does online review valence/helpfulness relate to consumer trust among Kerala e-commerce consumers?
RQ2. How do average star ratings encountered relate to consumer trust?
RQ3. To what extent does consumer trust predict purchase intention?
RQ4. Does consumer trust mediate the relationships of review valence and average star ratings with purchase intention?
Research Objectives
1. To examine the association between online review valence/helpfulness and consumer trust.
2. To examine the association between average star ratings encountered and consumer trust.
3. To estimate the effect of consumer trust on purchase intention.
4. To test whether consumer trust mediates the effects of review valence and average star ratings on purchase intention.
Hypotheses
H1. Online review valence/helpfulness is positively associated with consumer trust.
H2. Average star ratings encountered are positively associated with consumer trust.
H3. Consumer trust has a positive effect on purchase intention.
H4. Consumer trust mediates the effect of (a) online review valence/helpfulness and (b) average star ratings on purchase intention.
The study adopted a cross-sectional, quantitative survey design to examine how signals from online reviews and ratings relate to consumer trust and purchase intention in e-commerce. The empirical setting was Kerala, and the unit of analysis was the individual consumer with recent online shopping experience. A structured self-administered questionnaire captured perceptions of review valence, average star ratings, consumer trust, and purchase intention, enabling estimation of association and mediation effects within a single wave of data.
The population comprised adult residents of Kerala who had purchased from any e-commerce platform at least once in the preceding six months. A pragmatic quota strategy ensured coverage across the state’s three regions (South, Central, North) and across usage frequency (low: ≤1 order/month; medium/high: ≥2 orders/month). Respondents were recruited through Kerala-focused social media, messaging groups, and platform communities. Screening verified residence, age eligibility, and recent purchase history. Participation was voluntary, and informed consent preceded access to the questionnaire. A total of 200 valid responses were analysed after exclusions for incompleteness, duplicates, and patterned responding.
The instrument used 5-point Likert items (1 = strongly disagree to 5 = strongly agree). Four constructs were measured with concise multi-item scales: Online Review Valence/Helpfulness (eight items), Average Star Ratings as encountered for products considered (eight items), Consumer Trust (eight items), and Purchase Intention (eight items). A small number of negatively keyed items captured the inverse direction of the constructs and were reverse-coded prior to scoring. Items were adapted to the e-commerce context for clarity and brevity suitable for a short paper. Content adequacy and face clarity were verified through expert review, and minor wording refinements improved readability before fielding.
Data collection proceeded online. One attention-check item and a minimum reasonable completion-time flag supported quality control. Responses with missing pages, straight-lining across long stretches, or duplicate device/email identifiers were removed. After reverse-coding, composite scores for each construct were computed as the mean of their respective items so that higher values consistently indicated more favourable perceptions. Likert-scale means were treated as approximately continuous for inferential analysis.
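A minimal sketch of this scoring step is given below, assuming pandas and purely illustrative item names (e.g., orvh_1 … orvh_8) and reverse-keyed items; the study's actual codebook and the EDUSTAT routines are not reproduced here.

```python
import pandas as pd

# Illustrative item names only; the actual questionnaire codebook may differ.
CONSTRUCTS = {
    "ORVH": [f"orvh_{i}" for i in range(1, 9)],
    "ASR":  [f"asr_{i}" for i in range(1, 9)],
    "TRST": [f"trst_{i}" for i in range(1, 9)],
    "PI":   [f"pi_{i}" for i in range(1, 9)],
}
REVERSE_KEYED = ["orvh_7", "trst_5", "pi_6"]  # hypothetical negatively keyed items


def score_composites(df: pd.DataFrame) -> pd.DataFrame:
    """Reverse-code negatively keyed 5-point items, then average items per construct."""
    df = df.copy()
    df[REVERSE_KEYED] = 6 - df[REVERSE_KEYED]          # on a 1-5 scale, x -> 6 - x
    for name, items in CONSTRUCTS.items():
        df[f"{name}_mean"] = df[items].mean(axis=1)    # composite = mean of its items
    return df
```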
Analyses were carried out in EDUSTAT. Descriptive statistics (mean, median, mode, standard deviation, skewness, kurtosis) summarised construct distributions and sample characteristics. Hypothesis testing followed the study aims: bivariate Pearson correlations estimated the associations of review valence and average star ratings with consumer trust; a simple ordinary least squares regression estimated the effect of trust on purchase intention; and mediation was examined with trust as the mediator between each review/rating signal and purchase intention. Indirect effects were evaluated using the Sobel test and percentile bootstrap confidence intervals. Two-tailed tests with α = .05 were used, and effect sizes with confidence intervals accompanied p-values to aid interpretation.
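The inferential steps can be illustrated with the compact sketch below. It assumes the composite columns from the previous snippet, uses scipy/statsmodels rather than EDUSTAT, and illustrates the procedures (Pearson r, OLS, Sobel z, percentile bootstrap for the indirect effect) rather than the study's exact implementation.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm


def fit_slope(frame: pd.DataFrame, dep: str, preds):
    """OLS of dep on preds (plus an intercept); returns the fitted model."""
    return sm.OLS(frame[dep], sm.add_constant(frame[preds])).fit()


def mediation(df: pd.DataFrame, x: str, m: str, y: str,
              n_boot: int = 5000, seed: int = 1) -> dict:
    """Single-mediator model X -> M -> Y: Sobel z and percentile bootstrap CI for a*b."""
    a_model = fit_slope(df, m, [x])        # a: X -> M
    b_model = fit_slope(df, y, [m, x])     # b: M -> Y | X; also c': X -> Y | M
    a, se_a = a_model.params[x], a_model.bse[x]
    b, se_b = b_model.params[m], b_model.bse[m]
    indirect = a * b
    sobel_z = indirect / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    sobel_p = 2 * stats.norm.sf(abs(sobel_z))

    rng = np.random.default_rng(seed)
    idx = np.arange(len(df))
    boot = np.empty(n_boot)
    for i in range(n_boot):
        s = df.iloc[rng.choice(idx, size=len(idx), replace=True)]
        boot[i] = fit_slope(s, m, [x]).params[x] * fit_slope(s, y, [m, x]).params[m]
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
    return {"a": a, "b": b, "c_prime": b_model.params[x], "indirect": indirect,
            "sobel_z": sobel_z, "sobel_p": sobel_p, "boot_95ci": (ci_low, ci_high)}


# Illustrative usage with the assumed composite columns:
# r, p = stats.pearsonr(df["ORVH_mean"], df["TRST_mean"])          # H1
# h3 = fit_slope(df, "PI_mean", ["TRST_mean"])                     # H3
# h4a = mediation(df, x="ORVH_mean", m="TRST_mean", y="PI_mean")   # H4a
```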
Ethical principles guided all procedures. The questionnaire avoided personally identifying information beyond what was strictly necessary for data integrity checks, and results were analysed and reported in aggregate to preserve anonymity. The cross-sectional design, reliance on self-reports, and non-probability sampling delimit causal inference and statistical generalisability; these constraints align with the scope of a concise, journal-length study centred on Kerala e-commerce consumers.
Data Analysis and Interpretation
Analyses were carried out in EDUSTAT on data from Kerala e-commerce consumers (n = 200). Multi-item Likert responses were reverse-coded where required and averaged to form four composites: Online Reviews—Valence & Helpfulness (ORVH), Average Star Ratings Encountered (ASR), Consumer Trust (TRST), and Purchase Intention (PI). Two-tailed tests with α = .05 were used. Likert means were treated as approximately continuous.
Table 1 Distribution of Respondents by Region (n = 200)
| Region  | Count | Percent |
|---------|-------|---------|
| Central | 70    | 35.0    |
| North   | 60    | 30.0    |
| South   | 70    | 35.0    |
The sample is region-balanced within Kerala, with Central and South contributing equally (35.0% each) and North slightly lower (30.0%). This distribution supports comparability across regions, though inferences for the North should be read with modest caution due to its relatively smaller share.
Table 2 Distribution of Respondents by Gender (n = 200)
| Gender                    | Count | Percent |
|---------------------------|-------|---------|
| Female                    | 89    | 44.5    |
| Male                      | 109   | 54.5    |
| Other / Prefer not to say | 2     | 1.0     |
The sample shows a slight male majority (54.5%) relative to female respondents (44.5%), with very limited representation in the “Other / Prefer not to say” category (1.0%). This near-balance supports male–female descriptive comparisons; however, inferential tests involving the third category are not advisable due to the extremely small cell size.
Table 3 Distribution of Respondents by Age band (n = 200)
| Age band | Count | Percent |
|----------|-------|---------|
| 18–24    | 39    | 19.5    |
| 25–34    | 82    | 41.0    |
| 35–44    | 45    | 22.5    |
| 45+      | 34    | 17.0    |
The sample is concentrated in younger and mid-career cohorts, with the 25–34 band forming the largest group (41.0%), followed by 35–44 (22.5%). The 18–24 (19.5%) and 45+ (17.0%) segments are smaller. This age distribution aligns with typical e-commerce usage patterns and provides reasonable power for analyses focused on the 25–44 range, while comparisons involving the 45+ group should be interpreted cautiously due to its relatively smaller share.
Table 4 Distribution of Respondents by Orders per month (n = 200)
| Orders per month            | Count | Percent |
|-----------------------------|-------|---------|
| Low (≤1 per month)          | 80    | 40.0    |
| Medium/High (≥2 per month)  | 120   | 60.0    |
A majority of respondents report placing two or more online orders per month (60.0%), indicating an active user base, while a substantial minority order infrequently (40.0%). This mix offers variability for analysing behaviour by usage intensity, though findings may tilt toward patterns typical of more experienced shoppers.
Table 5 Distribution of Respondents by Primary platform (n = 200)
| Primary platform | Count | Percent |
|------------------|-------|---------|
| AJIO             | 12    | 6.0     |
| Amazon           | 84    | 42.0    |
| Flipkart         | 67    | 33.5    |
| Meesho           | 15    | 7.5     |
| Myntra           | 22    | 11.0    |
Platform usage is concentrated in two players—Amazon (42.0%) and Flipkart (33.5%)—which jointly account for three-quarters of the sample. The remaining quarter is spread across fashion-oriented platforms (Myntra 11.0%, AJIO 6.0%) and value-focused Meesho (7.5%). This distribution supports platform-level analyses centred on Amazon and Flipkart, while comparisons involving the smaller platforms should be interpreted cautiously due to limited cell sizes.
Table 6 Descriptive statistics of study constructs
| Construct                                    | N   | Mean | Median | Mode | SD   | Skewness | Kurtosis |
|----------------------------------------------|-----|------|--------|------|------|----------|----------|
| ORVH (Online Reviews—Valence & Helpfulness)  | 200 | 3.14 | 3.19   | 4.38 | 1.13 | −0.11    | −1.16    |
| ASR (Average Star Ratings Encountered)       | 200 | 3.12 | 3.12   | 4.62 | 1.11 | −0.09    | −1.14    |
| TRST (Consumer Trust)                        | 200 | 2.79 | 2.75   | 2.75 | 0.94 | 0.12     | −0.76    |
| PI (Purchase Intention)                      | 200 | 2.78 | 2.75   | 2.50 | 0.96 | 0.11     | −0.90    |
On the 1–5 Likert scale, perceptions of online reviews and ratings are modestly positive (ORVH mean = 3.14; ASR mean = 3.12), whereas consumer trust and purchase intention sit just below the neutral midpoint (TRST mean = 2.79; PI mean = 2.78), indicating scope to strengthen trust and conversion. Dispersion is moderate (SD ≈ 0.94–1.13), suggesting adequate variability for inference. Medians closely track means, and skewness values hover near zero (slightly negative for ORVH/ASR; slightly positive for TRST/PI), implying approximately symmetric distributions. Negative kurtosis across constructs (platykurtic) points to lighter tails than normal. Overall, the shape and spread of scores are suitable for correlation, regression, and mediation analyses using composite means.
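For readers reproducing these descriptives outside EDUSTAT, a short pandas equivalent (assuming the composite columns defined earlier) is sketched below; note that pandas' kurt reports excess kurtosis, matching the negative values in Table 6.

```python
# Descriptives for the four composites; mode is computed separately.
cols = ["ORVH_mean", "ASR_mean", "TRST_mean", "PI_mean"]
table6 = df[cols].agg(["count", "mean", "median", "std", "skew", "kurt"]).T.round(2)
modes = df[cols].mode().iloc[0]   # first modal composite value per construct
```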
Table 7 Correlation matrix among constructs (Pearson r with p-values, n = 200)
|            | ORVH_mean              | ASR_mean               | TRST_mean              | PI_mean                |
|------------|------------------------|------------------------|------------------------|------------------------|
| ORVH_mean  | 1.000                  | 0.515 (p = 5.72e−15)   | 0.766 (p = 7.61e−40)   | 0.765 (p = 9.31e−40)   |
| ASR_mean   | 0.515 (p = 5.72e−15)   | 1.000                  | 0.787 (p = 1.93e−43)   | 0.768 (p = 3.18e−40)   |
| TRST_mean  | 0.766 (p = 7.61e−40)   | 0.787 (p = 1.93e−43)   | 1.000                  | 0.893 (p = 9.35e−71)   |
| PI_mean    | 0.765 (p = 9.31e−40)   | 0.768 (p = 3.18e−40)   | 0.893 (p = 9.35e−71)   | 1.000                  |
All pairwise associations are positive and highly significant (p < .001). The strongest relationship is between Consumer Trust (TRST) and Purchase Intention (PI) (r = .893), indicating that intention to purchase is closely tied to trust. Online Review Valence/Helpfulness (ORVH) and Average Star Ratings Encountered (ASR) each show strong correlations with Trust (r = .766 and r = .787, respectively), supporting H1 and H2. ORVH and ASR also relate strongly to Purchase Intention (r ≈ .77), suggesting that review and rating signals are consequential for buying decisions. The ORVH–ASR correlation is moderate (r = .515), implying related but nonredundant constructs and little risk of severe multicollinearity if modelled together. These patterns motivate regression and mediation analyses with Trust as the key pathway to Purchase Intention.
Table 8 Hypotheses H1–H2 (bivariate tests with 95% CIs, two-tailed)
| Hypothesis       | r     | 95% CI         | p-value | n   |
|------------------|-------|----------------|---------|-----|
| H1: ORVH ↔ TRST  | 0.766 | [0.702, 0.818] | < .001  | 200 |
| H2: ASR ↔ TRST   | 0.787 | [0.728, 0.835] | < .001  | 200 |
Both hypotheses are supported. Online Review Valence/Helpfulness shows a strong positive association with Consumer Trust (r = 0.766), and Average Star Ratings Encountered shows a similarly strong positive association (r = 0.787). In both cases, the 95% confidence intervals lie entirely above zero and p-values are < .001, indicating precisely estimated, robust associations. Substantively, consumers who perceive reviews as more positive/helpful and who encounter stronger rating signals tend to report higher trust. As these are zero-order correlations from cross-sectional data, they evidence association rather than causation.
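As a check (not reported in the source), these intervals are consistent with the standard Fisher z-transformation; for H1 with r = .766 and n = 200:

$$
z = \operatorname{artanh}(r) = \tfrac{1}{2}\ln\!\frac{1+r}{1-r} \approx 1.011,
\qquad
SE_z = \frac{1}{\sqrt{n-3}} = \frac{1}{\sqrt{197}} \approx 0.071,
$$

$$
z \pm 1.96\,SE_z \approx [0.871,\ 1.150]
\;\Rightarrow\;
r \in [\tanh 0.871,\ \tanh 1.150] \approx [0.702,\ 0.818].
$$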
Table 9 Hypothesis H3 (simple regression of Purchase Intention on Trust)
| Predictor  | b (slope) | SE(b) | 95% CI for b   | t     | p-value | R²    | n   |
|------------|-----------|-------|----------------|-------|---------|-------|-----|
| TRST_mean  | 0.910     | 0.033 | [0.847, 0.974] | 27.99 | < .001  | 0.798 | 200 |
The simple OLS model shows that Consumer Trust is a strong, positive predictor of Purchase Intention. A one-unit increase in TRST_mean is associated with a 0.910-unit increase in PI_mean (95% CI [0.847, 0.974]). The effect is highly significant (t = 27.99, p < .001) and the model explains a large share of variance (R² = 0.798), indicating that trust accounts for most of the observed differences in purchase intention. While results are consistent with theory, the cross-sectional design supports association rather than causal inference.
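Two arithmetic consistency checks tie Table 9 to the earlier tables (using the rounded SDs from Table 6 and the TRST–PI correlation from Table 7; small gaps reflect rounding):

$$
R^2 = r_{\text{TRST,PI}}^2 = 0.893^2 \approx 0.797 \;(\text{reported } 0.798),
\qquad
b = r_{\text{TRST,PI}}\,\frac{s_{\text{PI}}}{s_{\text{TRST}}} = 0.893 \times \frac{0.96}{0.94} \approx 0.91 \;(\text{reported } 0.910).
$$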
Table 10 Hypothesis H4 (mediation by Trust; Sobel test and bootstrap CI for indirect effect)
| Model                   | a (X→M) | b (M→Y given X) | c (total) | c′ (direct) | Indirect a×b | Sobel z | Sobel p | Boot 95% CI    | n   |
|-------------------------|---------|-----------------|-----------|-------------|--------------|---------|---------|----------------|-----|
| H4a: ORVH → Trust → PI  | 0.641   | 0.757           | 0.652     | 0.167       | 0.485        | 11.40   | < .001  | [0.406, 0.571] | 200 |
| H4b: ASR → Trust → PI   | 0.670   | 0.773           | 0.666     | 0.148       | 0.518        | 11.53   | < .001  | [0.430, 0.607] | 200 |
The mediation tests indicate that Consumer Trust carries a substantial part of the impact of both signals—Online Review Valence/Helpfulness (ORVH) and Average Star Ratings Encountered (ASR)—on Purchase Intention (PI). For ORVH, the indirect effect is large and significant (a×b = 0.485; Sobel z = 11.40, p < .001), with a bootstrap 95% CI [0.406, 0.571] excluding zero; the direct effect remains positive after accounting for Trust (c′ = 0.167), implying partial rather than full mediation. For ASR, the indirect effect is likewise large and significant (a×b = 0.518; Sobel z = 11.53, p < .001; bootstrap 95% CI [0.430, 0.607]), with a smaller but positive direct effect (c′ = 0.148), again indicating partial mediation. In proportional terms, Trust transmits roughly three-quarters of the total effect for ORVH (≈74%) and ASR (≈78%), underscoring Trust as the primary pathway from reviews and ratings to intention. Given the cross-sectional design, these results support robust indirect associations but do not establish causal mediation.
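For reference, the Table 10 quantities follow the standard single-mediator decomposition; illustrating with the H4a (ORVH) estimates, and noting that the standard errors entering the Sobel statistic are not reported in the table:

$$
c = c' + a\,b:\quad 0.167 + 0.641 \times 0.757 \approx 0.652,
\qquad
\text{proportion mediated} = \frac{a\,b}{c} = \frac{0.485}{0.652} \approx 0.74,
$$

$$
z_{\text{Sobel}} = \frac{a\,b}{\sqrt{b^{2}\,SE_a^{2} + a^{2}\,SE_b^{2}}}.
$$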
This study set out to examine how signals available at the point of online evaluation—review valence and average star ratings—relate to consumer trust and, in turn, to purchase intention among Kerala e-commerce consumers. The empirical pattern across the analyses is internally consistent and theoretically coherent: consumers who encounter more positive reviews and stronger rating signals tend to report higher trust, and trust is the primary route through which these signals translate into willingness to purchase.
First, the strong positive association between review valence/helpfulness and trust (H1) suggests that narrative feedback continues to play a central role in shaping confidence. Reviews clarify product fit, reduce ambiguity, and offer situational detail (e.g., photos, usage context) that consumers in Kerala appear to treat as credible cues of reliability. The size and precision of the correlation, together with narrow confidence intervals, indicate that this is not a marginal effect but a substantive one.
Second, higher average star ratings are also strongly associated with trust (H2). Ratings are compact summary heuristics; they compress dispersed experiences into a single metric that consumers can process quickly. The results indicate that this compressed signal is not merely convenient—it is consequential for trust formation. Notably, the correlation between review valence and ratings is only moderate, implying that reviews and ratings provide partly distinct information; together they reduce uncertainty more than either does alone.
Third, trust shows a large, positive effect on purchase intention (H3), explaining a substantial share of its variance. This positions trust as the keystone attitudinal mechanism: even when consumers perceive favourable reviews and ratings, it is their resultant confidence in the platform and sellers that most directly propels intention to purchase. Practically, interventions that elevate trust are likely to yield outsized gains in conversion relative to efforts that only raise awareness or generate traffic.
Fourth, the mediation tests (H4) reveal that trust transmits a substantial part of the influence of both review valence and average ratings to purchase intention. The indirect effects are large and statistically robust, while the direct paths remain positive but smaller—evidence of partial mediation. This pattern aligns with a two-stage decision process: consumers first update their trust based on review and rating signals, and then translate that trust into purchase intention; at the same time, some residual, direct influence of these signals on intention persists (e.g., a very high rating may nudge purchase even before trust is fully formed).
Overall, the results portray a coherent trust-centred mechanism in which textual reviews and numeric ratings function as complementary, nonredundant cues. The consistency across bivariate associations, regression, and mediation strengthens confidence in the findings within the limits of the design.
The study’s boundaries should be noted. The cross-sectional design supports association, not causal claims; experimental or longitudinal designs could more directly establish temporal precedence. The non-probability, region-quota sampling frames Kerala’s consumer base well enough for a short paper, but generalisation beyond similar contexts should be made cautiously. Measures rely on self-reports and composites that treat Likert means as approximately continuous; while standard, this approach may smooth within-item nuances. Platform-level and category-level heterogeneity (e.g., electronics versus apparel) were not modelled here and could moderate the effects.
Future work can extend these results by manipulating review valence and rating levels in controlled experiments, modelling platform/category moderators, and incorporating additional credibility signals (review volume, rater profiles, verified purchase badges). Nonetheless, within the present scope, the findings clearly indicate that improving the quality and clarity of review content and ensuring robust, well-calibrated rating signals are likely to raise trust—and through trust, purchase intention—among Kerala e-commerce consumers.
Implications of the Study
The results position consumer trust as the keystone linking review valence and average star ratings to purchase intention. Platform design should therefore prioritise trust-building features. Reviews benefit from emphasis on recency, depth, and relevance: default sorting by “most recent” or “most helpful,” prompts that nudge buyers to report use context and product fit, and friction-free photo/video uploads. Helpful-vote mechanisms and summarised pros–cons sections make review signals clearer and reduce ambiguity.
Rating signals work best when credibility cues accompany the average. Displaying the average alongside the rating count, a distribution histogram, and a clear “verified purchase” badge improves diagnosticity. Time stamps and category-specific benchmarks (e.g., “4.3 vs category median 4.0”) prevent over- or under-weighting. Very high averages based on very few ratings should carry a cautionary cue; such transparency sustains trust.
Trust infrastructure remains decisive. Prominent, plain-language policies on returns, refunds, and warranties; visible payment and data-security assurances; and fast, trackable customer support reduce perceived risk. Localised support (Malayalam interface/help content) and reliable last-mile logistics reinforce confidence that products arrive as described and issues receive fair resolution.
Seller practices shape both signals and trust. Post-purchase review requests that focus on authenticity (not only positivity), timely and courteous public responses to negative feedback, and visible quality-assurance steps (e.g., size guides, compatibility notes) reduce mismatches between description and delivery—directly strengthening trust and indirectly lifting intention.
Content governance matters. Proactive detection of inauthentic reviews/ratings, clear disclosure of incentivised content, and penalties for manipulation protect the informational environment. Aligning platform policies with widely accepted consumer-protection norms and communicating these safeguards to users enhances perceived integrity.
For marketing and merchandising, trust metrics become operating KPIs. Campaigns that surface authentic reviews, verified-buyer quotes, and rating distributions are likely to outperform generic creatives. A/B tests on review layout, summary snippets, and trust badges can quantify lift along the trust → intention pathway indicated by the mediation results.
Segmentation yields actionable focus. Medium/high-frequency shoppers already contribute strong signals; targeted interventions for low-frequency shoppers—such as clearer returns messaging or first-purchase guarantees—address the groups with greater residual uncertainty. Platform- and category-level dashboards that track review quality, average star ratings, trust, and conversion by cohort enable continuous optimisation.
For research and analytics, the findings justify routine reporting of indirect effects alongside direct effects in e-commerce studies. Future work in this stream would benefit from experimental manipulations of review valence and average rating levels, category-specific analyses, and longitudinal designs that establish temporal ordering. Measurement that separately captures “average star ratings encountered” and “perceived average star ratings” would clarify mechanisms without inflating multicollinearity.
The study shows that signals at the point of evaluation—more positive online review valence and higher average star ratings—are strongly associated with greater consumer trust, and that trust, in turn, is a powerful predictor of purchase intention among Kerala e-commerce consumers. Mediation analyses indicate that trust carries a substantial portion of the impact of reviews and ratings on intention, while smaller direct effects remain, consistent with partial mediation. These findings highlight trust as the central mechanism linking review and rating signals to conversion, underscoring the value of improving review quality, ensuring credible rating displays, and strengthening returns, security, and support policies. The conclusions rest on cross-sectional, self-reported data from a non-probability Kerala sample, so causal claims and broad generalisation warrant caution. Even so, the pattern is coherent and actionable for platform design and seller practices focused on building trustworthy informational environments.