Question 750:

We are given two mean payback periods (X̄1 and X̄2) based on two independent samples of sizes 100 and 64. The values labeled S² are the sample variances, 0.5 and 0.6 respectively (the variance is simply the standard deviation squared, and is commonly denoted s²).

To determine whether there is a statistically significant difference, we perform a two-sample t-test on the difference between the means. We assume equal population variances, which is reasonable here since the two sample variances (0.5 and 0.6) are close and both samples are relatively large.

We divide the difference between the means by the standard error of that difference. The standard error is built from the pooled variance, which is a weighted average of the two sample variances with weights equal to their degrees of freedom: s_p² = (99(0.5) + 63(0.6)) / 162 ≈ 0.539. The standard error is then √(s_p² × (1/100 + 1/64)) ≈ 0.1175. With a difference between the means of 1, the test statistic is t = 1 / 0.1175 ≈ 8.51 on n1 + n2 − 2 = 162 degrees of freedom. This corresponds to a p-value well below .001 (most t-tables only go up to values around 5).
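The calculation above can be sketched in Python. This is a minimal illustration assuming, as in the problem, sample sizes of 100 and 64, sample variances of 0.5 and 0.6, and a difference between the sample means of 1:

```python
import math
from scipy import stats

# Assumed inputs from the problem statement: sample sizes,
# sample variances, and the difference between the two sample means.
n1, n2 = 100, 64
s2_1, s2_2 = 0.5, 0.6
mean_diff = 1.0

# Pooled variance: a weighted average of the two sample variances,
# weighted by their degrees of freedom (n - 1).
sp2 = ((n1 - 1) * s2_1 + (n2 - 1) * s2_2) / (n1 + n2 - 2)

# Standard error of the difference between the means.
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Test statistic and two-sided p-value on n1 + n2 - 2 degrees of freedom.
t = mean_diff / se
df = n1 + n2 - 2
p = 2 * stats.t.sf(abs(t), df)

print(f"SE = {se:.4f}, t = {t:.2f}, df = {df}, p = {p:.2e}")
# SE ≈ 0.1175, t ≈ 8.51, df = 162, p far below .001
```

The same result could be obtained directly from `scipy.stats.ttest_ind_from_stats` with `equal_var=True`, which performs the pooled two-sample t-test from summary statistics.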

Since the p-value is less than .05 (indeed, well below .001), we reject the null hypothesis that there is no difference between the groups. There is therefore strong evidence of a difference in the average payback period.