One-way analysis of variance can be viewed as a special case of bivariate analysis, where one variable (X) is a categorical (nominal) variable and the other variable (Y) is an interval-scaled variable. Suppose X is classified into g categories. The interrelationship between X and Y involves a comparison of the distribution of Y among the g categories of X. This comparison may involve distribution parameters, such as means or variances. The statistical procedure that compares the means of different groups is called Analysis of Variance (ANOVA). It may seem odd that a procedure that compares means is called analysis of variance, but the name derives from the fact that, in order to test for statistical significance between means, variances are actually compared (i.e., analyzed). The categorical variable is usually referred to as a factor and the categories are called levels. One-way analysis of variance is also called single-factor analysis.
The data layout is:

         Group 1    Group 2    ...    Group g
Y        y_{11}     y_{12}     ...    y_{1g}
         y_{21}     y_{22}     ...    y_{2g}
         .          .                 .
         .          .                 .
Sum      Σy_{1}     Σy_{2}     ...    Σy_{g}
Mean     ȳ_{1}      ȳ_{2}      ...    ȳ_{g}

Overall or grand mean = (Σy_{1} + Σy_{2} + ... + Σy_{g})/n
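As a quick illustration, the group sums, group means, and grand mean can be computed from a small data set. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical data: g = 3 groups of unequal size
groups = {
    "Group 1": [12, 15, 11, 14],
    "Group 2": [18, 17, 20],
    "Group 3": [10, 9, 12, 11, 13],
}

sums = {name: sum(ys) for name, ys in groups.items()}             # Σy_j per group
means = {name: sum(ys) / len(ys) for name, ys in groups.items()}  # ȳ_j per group

n = sum(len(ys) for ys in groups.values())  # total sample size n = Σ n_j
grand_mean = sum(sums.values()) / n         # (Σy_1 + Σy_2 + ... + Σy_g)/n
```

Note that the grand mean is the mean of all n observations pooled together, which equals a weighted (not simple) average of the group means when the n_j differ.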
Essentially, the method depends upon partitioning both the degrees of freedom and the sum of squared deviations into two components: one called 'error' and the other called 'effect'. The sum of squared deviations for the effect is also influenced by the error, an all-pervading uncertainty or noise, distributed such that under the null hypothesis of no difference between the means of the categories (i.e., absence of an effect), the expected values of the two sums of squared deviations are proportional to their respective degrees of freedom. Hence, the mean squared deviations (i.e., the sums of squared deviations divided by their degrees of freedom) would have the same expectations. If, however, the effect does exist, it inflates its own mean squared deviation, but not that of the error. If the effect is large enough, this leads to significance shown by the F-test. In this context, F equals the ratio of the mean squared deviation for the effect to that for the error.
The behavior of the Y observations can be modeled as follows:

y_{ij} = μ_{j} + e_{ij},   i = 1, 2, ..., n_{j};  j = 1, 2, ..., g

where e_{ij} represents a random variable with mean 0 and variance σ_{j}². It is assumed that the e_{ij} are mutually independent. Under the assumption of homogeneity of variance (σ_{j}² = σ² for all j) the model can be written as:

y_{ij} = μ_{j} + e_{ij},  e_{ij} ~ (0, σ²),   i = 1, 2, ..., n_{j};  j = 1, 2, ..., g

where ȳ_{j} is an estimate of μ_{j} and (y_{ij} − ȳ_{j}) is an estimate of e_{ij}. The predicted values for y_{ij} from this model are ŷ_{ij} = ȳ_{j}.
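The model can be sketched as a small simulation; the group means μ_j, the group sizes n_j, and the common σ below are all hypothetical values chosen for illustration:

```python
import random

random.seed(1)

# A minimal sketch of y_ij = mu_j + e_ij under homogeneity of variance:
# every e_ij is drawn with mean 0 and the same standard deviation sigma.
mu = [10.0, 12.0, 15.0]   # hypothetical group means mu_j, j = 1, ..., g
n_j = [4, 3, 5]           # hypothetical group sizes n_j
sigma = 2.0               # common error standard deviation

# Generate y_ij = mu_j + e_ij with e_ij ~ Normal(0, sigma)
y = [[m + random.gauss(0.0, sigma) for _ in range(nj)]
     for m, nj in zip(mu, n_j)]

# Each group sample mean estimates the corresponding mu_j
y_bar = [sum(group) / len(group) for group in y]
```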
The total variation in Y over the sample, or the total sum of squares (TSS), is determined by

TSS = Σ_{j=1}^{g} Σ_{i=1}^{n_j} (y_{ij} − ȳ..)²

where ȳ.. is the overall or grand mean given by

ȳ.. = (1/n) Σ_{j=1}^{g} Σ_{i=1}^{n_j} y_{ij}

where n = Σ_{j=1}^{g} n_{j}. This sum of squares measures the variation of the values of Y around the grand mean (i.e., the sum of squared deviations).
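A minimal sketch of the TSS computation, using made-up numbers:

```python
# Hypothetical data for g = 3 groups
data = [[12, 15, 11, 14], [18, 17, 20], [10, 9, 12, 11, 13]]

n = sum(len(group) for group in data)                      # n = Σ n_j
grand_mean = sum(y for group in data for y in group) / n   # grand mean ȳ..

# TSS: squared deviation of every observation from the grand mean
tss = sum((y - grand_mean) ** 2 for group in data for y in group)
```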
The sum of squares between groups (BSS) measures the variation between the group means and is computed as follows:

BSS = Σ_{j=1}^{g} n_{j}(ȳ_{j} − ȳ..)²

Mean square between groups = MSB = BSS/(g − 1)
The sum of squares within groups (WSS) is given by

WSS = Σ_{j=1}^{g} Σ_{i=1}^{n_j} (y_{ij} − ȳ_{j})²

Mean square within groups = MSW = WSS/(n − g).
It can easily be seen that TSS = BSS + WSS. Thus, the total variation is partitioned into two components: the within-groups variation and the between-groups variation.
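The partition TSS = BSS + WSS can be checked numerically on hypothetical data:

```python
# Hypothetical data for g = 3 groups
data = [[12, 15, 11, 14], [18, 17, 20], [10, 9, 12, 11, 13]]

n = sum(len(group) for group in data)
grand_mean = sum(y for group in data for y in group) / n
group_means = [sum(group) / len(group) for group in data]

# Total, between-groups, and within-groups sums of squares
tss = sum((y - grand_mean) ** 2 for group in data for y in group)
bss = sum(len(g_) * (m - grand_mean) ** 2
          for g_, m in zip(data, group_means))
wss = sum((y - m) ** 2
          for g_, m in zip(data, group_means) for y in g_)

assert abs(tss - (bss + wss)) < 1e-9   # TSS = BSS + WSS
```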
Under the normality assumption, the ratio MSB/MSW has an F distribution with (g − 1) and (n − g) degrees of freedom. This test statistic can be used to test the null hypothesis of no difference between the group means. The computation of the F statistic is illustrated in the following table, called the ANOVA table.
Source of     Degrees of    Sum of      Mean                   F
variance      freedom       squares     square                 ratio
Between       g − 1         BSS         MSB = BSS/(g − 1)      MSB/MSW
Within        n − g         WSS         MSW = WSS/(n − g)
Total         n − 1         TSS

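A sketch of the full F computation for the hypothetical 3-group data used above (n = 12); scipy.stats.f_oneway would return the same F ratio, but the version below uses only the standard library:

```python
# Hypothetical data for g = 3 groups
data = [[12, 15, 11, 14], [18, 17, 20], [10, 9, 12, 11, 13]]

g = len(data)
n = sum(len(group) for group in data)

grand_mean = sum(y for group in data for y in group) / n
group_means = [sum(group) / len(group) for group in data]

bss = sum(len(grp) * (m - grand_mean) ** 2
          for grp, m in zip(data, group_means))
wss = sum((y - m) ** 2
          for grp, m in zip(data, group_means) for y in grp)

msb = bss / (g - 1)   # mean square between groups
msw = wss / (n - g)   # mean square within groups
f_ratio = msb / msw   # compare against the F(g-1, n-g) distribution
```

A large F ratio relative to the F(g − 1, n − g) reference distribution leads to rejection of the null hypothesis of equal group means.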