## Question 483:


There are several important aspects to consider when calculating the difference here. I see that you have control and test groups.

1. If you tested the same people before and after the advertising period, you have a within-subjects design, which is good: it lets you detect a smaller difference with a smaller sample size. In a within-subjects design, between-subject variability is removed from the error term because each person serves as their own control.
2. Assuming you're using the same people, your dependent variable is market share, which I interpret as a proportion. That is, 10.8% represents some number of people in the test group who watched the show out of the total group. To answer this question you need the raw counts behind the percentages (e.g., is this 108 out of 1,000 or 1,080 out of 10,000?). You cannot answer the question without those values. If you don't know the exact counts but can come close, those approximate numbers can still be used in the calculation, with the understanding that it's an assumption. Hopefully you do have the numbers.
3. With the raw counts the computation is pretty straightforward. You'd use a McNemar chi-square test (also called a test of correlated proportions). Set your data up in a 2x4 table like the one below (which is a paired 2x2 within each group) and compare the change in proportions in the test group against the change in the control group over time.

| Group   | Pre-Ad: Watch | Pre-Ad: Didn't Watch | Post-Ad: Watch | Post-Ad: Didn't Watch |
|---------|---------------|----------------------|----------------|------------------------|
| Test    |               |                      |                |                        |
| Control |               |                      |                |                        |
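To make step 3 concrete, here is a minimal sketch of the McNemar test for one group using `statsmodels`. The counts are entirely hypothetical (they are not from your question); the table is the paired 2x2 for the test group, cross-tabulating each person's pre-ad status against their post-ad status, and you would run the same test on the control group's table:

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired counts for the test group (not from the question):
# rows = Pre-Ad (Watch, Didn't Watch), cols = Post-Ad (Watch, Didn't Watch)
test_group = [
    [60, 12],   # watched pre-ad: 60 still watch, 12 stopped
    [48, 880],  # didn't watch pre-ad: 48 started, 880 still don't
]

# McNemar only uses the discordant cells (12 changers vs 48 changers);
# exact=False gives the chi-square version with continuity correction.
result = mcnemar(test_group, exact=False, correction=True)
print(f"McNemar chi-square = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```

Note that only the off-diagonal (discordant) cells drive the test: people whose viewing status changed between the two time points.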

You'll also want to compute an odds ratio with a confidence interval to quantify the likely increase in viewership due to ad exposure. If you send me your data I'm happy to run the computations in statistical software and send you the annotated results.
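For the odds ratio, a minimal sketch with the standard Wald confidence interval on the log scale, comparing post-ad viewership between the test and control groups. The test-group counts assume 10.8% of a hypothetical 1,000 people, and the control-group counts are entirely made up for illustration:

```python
import math

# Hypothetical post-ad counts (only the 10.8% figure comes from the question):
watch_test, nowatch_test = 108, 892  # test group: 10.8% of 1,000 watched
watch_ctrl, nowatch_ctrl = 75, 925   # control group: invented for illustration

# Odds ratio and Wald 95% CI: exp(log(OR) +/- 1.96 * SE(log OR)),
# where SE(log OR) = sqrt(sum of reciprocals of the four cell counts).
odds_ratio = (watch_test * nowatch_ctrl) / (nowatch_test * watch_ctrl)
se_log_or = math.sqrt(1 / watch_test + 1 / nowatch_test
                      + 1 / watch_ctrl + 1 / nowatch_ctrl)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

If the interval excludes 1, the increase in the odds of watching in the test group is unlikely to be due to chance alone (at the 5% level).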