Can Marginal R2 Exceed Conditional R2 in a GLMM?
On Cross Validated, questions about goodness of fit for Generalized Linear Mixed Models (GLMMs) often lead to the Nakagawa and Schielzeth (2013) approach. This method provides two distinct metrics: Marginal $R^2$ ($R^2_m$) and Conditional $R^2$ ($R^2_c$). A common question is whether the variance explained by the fixed effects alone can ever exceed the variance explained by the full model.
1. Defining the R-Squared Metrics
To understand the hierarchy of these values, we must look at what they represent mathematically in the Nakagawa-Schielzeth framework:
- Marginal $R^2$ ($R^2_m$): Represents the proportion of variance explained by the fixed effects only.
- Conditional $R^2$ ($R^2_c$): Represents the proportion of variance explained by both fixed and random effects combined.
2. The Mathematical Hierarchy
The short answer is: No, in a correctly specified model, Marginal $R^2$ cannot be greater than Conditional $R^2$.
The logic follows from the additive nature of the variance decomposition in a GLMM. The two metrics are generally structured as follows:
$$R^2_m = \frac{\sigma^2_f}{\sigma^2_f + \sigma^2_r + \sigma^2_e + \sigma^2_d}$$
$$R^2_c = \frac{\sigma^2_f + \sigma^2_r}{\sigma^2_f + \sigma^2_r + \sigma^2_e + \sigma^2_d}$$
Where:
- $\sigma^2_f$ is the variance of fixed effects.
- $\sigma^2_r$ is the variance of random effects.
- $\sigma^2_e$ is the residual variance.
- $\sigma^2_d$ is the distribution-specific variance inherent to non-Gaussian GLMMs (e.g., $\pi^2/3$ for a binomial model with a logit link).
Since the denominators are identical and $\sigma^2_r$ (the random-effect variance) must be $\geq 0$, the numerator of $R^2_m$ can never be larger than the numerator of $R^2_c$. At most, if the random-effect variance is exactly zero, $R^2_m$ will equal $R^2_c$.
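As a sketch of this bound, both ratios can be computed directly from the variance components in the formulas above. The numbers below are invented for illustration, not taken from a fitted model:

```python
def nakagawa_r2(var_fixed, var_random, var_residual, var_dist=0.0):
    """Compute marginal and conditional R^2 from variance components,
    following the Nakagawa-Schielzeth decomposition."""
    total = var_fixed + var_random + var_residual + var_dist
    r2_marginal = var_fixed / total                     # fixed effects only
    r2_conditional = (var_fixed + var_random) / total   # fixed + random
    return r2_marginal, r2_conditional

# Illustrative components: sigma2_f = 2.0, sigma2_r = 1.5, sigma2_e = 1.0
r2m, r2c = nakagawa_r2(2.0, 1.5, 1.0)
print(round(r2m, 3), round(r2c, 3))  # 0.444 0.778
assert r2m <= r2c  # holds whenever var_random >= 0
```

Because the denominators are shared, the bound reduces to comparing numerators, which is why a non-negative $\sigma^2_r$ settles the question.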
3. Why Might Your Software Report Weird Values?
If your statistical software reports a result where $R^2_m > R^2_c$, it usually indicates one of the following issues:
- Singular Fit: The model has failed to estimate the random effects properly, often setting their variance to near-zero.
- Software Glitch: In some older versions of R packages (like `MuMIn` or `performance`), specific approximations for non-Gaussian distributions (e.g., Gamma or Poisson) might produce artifacts if the model is poorly converged.
- Negative Variance Estimates: In rare cases using non-Bayesian methods, a model might "over-correct" for grouping, leading to nonsensical negative variance estimates for random intercepts.
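A singular fit can often be spotted directly from the estimated variance components: when the random-effect variance collapses to zero, the two $R^2$ values coincide rather than cross. This is a minimal sketch, assuming you have already extracted the component estimates from your fitting software; the tolerance is an arbitrary illustrative choice:

```python
def check_fit(var_fixed, var_random, var_residual, tol=1e-8):
    """Return (r2_marginal, r2_conditional, is_singular) from estimated
    variance components of a Gaussian mixed model."""
    total = var_fixed + var_random + var_residual
    r2m = var_fixed / total
    r2c = (var_fixed + var_random) / total
    return r2m, r2c, var_random < tol

# Random-effect variance estimated at zero: a singular fit
r2m, r2c, singular = check_fit(2.0, 0.0, 1.0)
print(singular, r2m == r2c)  # True True
```

If you instead see $R^2_m$ strictly greater than $R^2_c$, the problem lies upstream of this arithmetic, typically in the software's approximation of the components.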
4. Interpreting the Gap ($R^2_c - R^2_m$)
The difference between these two values is interpreted as a measure of the importance of the grouping structure.
| Scenario | Result | Interpretation |
|---|---|---|
| $R^2_c \approx R^2_m$ | Small Gap | Random effects add little explanatory power; fixed effects do the heavy lifting. |
| $R^2_c \gg R^2_m$ | Large Gap | Most of the explained variance comes from the differences between groups (clusters). |
| $R^2_m > R^2_c$ | Impossible | Check model convergence and for singular fits. |
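The gap in the table above has a clean algebraic form: it equals the random-effect variance divided by the total variance. A short numeric sketch, with invented components for the two non-degenerate scenarios:

```python
def r2_gap(var_fixed, var_random, var_residual):
    """Gap between conditional and marginal R^2.
    Algebraically this equals var_random / total variance."""
    total = var_fixed + var_random + var_residual
    r2m = var_fixed / total
    r2c = (var_fixed + var_random) / total
    return r2c - r2m

# Fixed effects dominate: small gap
print(round(r2_gap(4.0, 0.1, 1.0), 3))  # 0.02
# Clustering dominates: large gap
print(round(r2_gap(0.5, 4.0, 1.0), 3))  # 0.727
```

Reporting the gap alongside both $R^2$ values makes the role of the clustering explicit to readers.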
Conclusion
Because the Conditional $R^2$ sums the fixed- and random-effect contributions, it functions as an upper bound for the Marginal $R^2$. On Cross Validated, experts will typically point out that if $R^2_m$ appears larger, your model is likely broken or the random effect is effectively null. In practice, report both values to provide a transparent view of how much your predictors matter versus how much the inherent nesting in your data affects the outcome.