Appendix 2: Statistical methods used in 551 data analyses

Please note: this appendix is not intended to be an exhaustive survey of biostatistical methods. For those interested in statistics, we encourage you to enroll in a statistics course. However, this appendix should provide enough information to allow you to make statistical analyses of your 551 data.

Error within samples: Standard Deviation

Standard deviation is a measure that quantifies the amount of variation in a data set. Whenever you collect multiple data sets, i.e. multiple replicates, you must report the result as the mean ± standard deviation. This includes graphical representations of data, where the mean should be plotted as the data point, with error bars representing the standard deviation. It is also important to state in the figure legend that the plot shows the mean with error bars representing the standard deviation.

Standard deviations are easily calculated within Excel or other mathematical software.
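If you prefer scripting to a spreadsheet, the mean and sample standard deviation can be computed with Python's standard library. The sketch below uses hypothetical replicate values, not data from this course.

```python
import statistics

# Triplicate measurements (hypothetical example values)
replicates = [0.52, 0.49, 0.55]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation (n - 1 denominator)

print(f"Report as {mean:.2f} \u00b1 {sd:.2f}")  # 0.52 ± 0.03
```

Note that `statistics.stdev` is the sample standard deviation (dividing by n − 1), which is the appropriate choice for a small number of replicates; `statistics.pstdev` is the population version.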

Outliers: Q test

The Q test is used to identify and discard outliers, assuming a normal distribution. It is only used to discard outliers within a replicate. The Q test should be used sparingly and not more than once within the same replicate. It won’t rescue poor quality data, but it can clean up your results if you have a single point that is far off from others within the same replicate.

The Q test is defined as:

Q = \frac{gap}{range}
where gap is the difference between the value in question and the closest number to it, and range is the difference between the highest and lowest values in the set.

For example, consider the following triplicate data set:

561   590   10541

Calculate Q, hypothesizing that 10541 is an outlier:

Q = \frac{gap}{range} = \frac{10541 - 590}{10541 - 561} = \frac{9951}{9980} = 0.997

With three data points, at 95% confidence, Q must be greater than 0.970 to discard the point as an outlier (see table below). As our calculation satisfied this requirement, we can conclude that 10541 is an outlier and discard it from the data set. If you do this, you must state in the methods section that a Q test was used to discard outliers at 95% confidence.

Table A.1: Limit values for Q. Limit values are provided for different sample numbers at different confidence levels. If your calculated Q value is above the appropriate value in the table, you can discard the value in question as an outlier.

Number of replicates:   3     4     5     6     7     8     9     10
90% confidence:       .941  .765  .642  .560  .507  .468  .437  .412
95% confidence:       .970  .829  .710  .625  .568  .526  .493  .466
99% confidence:       .994  .926  .821  .740  .680  .634  .598  .568
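The Q test is simple enough to script. Below is a minimal sketch using the 95% confidence limits from Table A.1; the `q_test` helper is hypothetical, not part of any library, and assumes the suspect value is the highest or lowest in the set.

```python
# 95% confidence limits for Q, keyed by number of replicates (Table A.1)
Q_LIMITS_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
               7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

def q_test(values, suspect):
    """Return (Q, is_outlier) for the suspect value at 95% confidence.

    The suspect should be the highest or lowest value in the set.
    """
    ordered = sorted(values)
    # gap: difference between the suspect and the value closest to it
    gap = min(abs(suspect - v) for v in ordered if v != suspect)
    # range: difference between the highest and lowest values
    rng = ordered[-1] - ordered[0]
    q = gap / rng
    return q, q > Q_LIMITS_95[len(values)]

# The triplicate example from above: is 10541 an outlier?
q, discard = q_test([561, 590, 10541], suspect=10541)
print(round(q, 3), discard)  # 0.997 True
```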

Significant Differences: Student’s t test

The t test is a statistical hypothesis test that can be used to determine if two data sets are significantly different from each other. The test either rejects or fails to reject the null hypothesis: that there is no difference between the two sets of data.

There are a number of different versions of the t test, depending on specific parameters of the data sets in question. For the purposes of 551 lab, we will use a two-tailed t test for two unpaired samples and assume equal variance. Below is a list of tools that can be used to perform a t test, followed by the interpretation of the t test results.

  • Vassarstats: click on t-Tests & Procedures, choose Independent Samples, and then type in your data.
  • GraphPad QuickCalcs:
  • Microsoft Excel can be used to perform a t test using the command:
    • T.TEST(array1, array2, 2, 2)
    • The “2, 2” in the parentheses above specify a two-tailed test (third argument) and a two-sample test assuming equal variance (fourth argument).
    • You can search Excel help for “t test” to get more information on this command.
  • Prism can be used to perform a t test when you run a curve fit on two different data sets (i.e. wild type vs mutant). When running a curve fit, look for the tab “Compare” and select the option to compare best fit values of selected parameters between data sets.

The important output of a t-test is a p-value: the probability of obtaining a difference at least as large as the one observed if the null hypothesis (no real difference between the samples) were true. It is generally accepted that a p-value < 0.05 indicates a statistically significant difference. It is important to note that this cutoff is arbitrary, so a p-value of 0.05001 is not very different from 0.04999.

For example, consider the following Km values (in mM) from three experiments with HCAII and PNPA:

wt: 4.4, 4.1, 4.7

mutant: 2.9, 3.3, 2.9

Using this data and the T.TEST command, Excel calculates a p-value of 0.0038. As this is lower than 0.05, we can conclude that wt and mutant HCAII have a statistically significant difference in Km.
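The tools listed above are the recommended route, but the underlying statistic is straightforward to compute. As a sketch of what those tools are doing, the code below calculates the pooled two-sample t statistic with Python's standard library and compares |t| against 2.776, the two-tailed critical value for 4 degrees of freedom at 95% confidence.

```python
import math
import statistics

def pooled_t(sample1, sample2):
    """Two-sample t statistic assuming equal variance (Student's t test)."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    # Pool the two sample variances, weighted by degrees of freedom
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# The Km example from above (in mM)
wt = [4.4, 4.1, 4.7]
mutant = [2.9, 3.3, 2.9]

t = pooled_t(wt, mutant)
# Two-tailed critical value for df = 4 at 95% confidence is 2.776;
# |t| above this value means p < 0.05.
print(round(t, 2), abs(t) > 2.776)  # 6.25 True
```

Comparing |t| to the critical value answers the same yes/no question as checking whether the p-value is below 0.05; the tools above additionally report the exact p-value.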

The p-value reflects the spread of your data. The smaller the spread of the values within each group (smaller standard deviation) and the larger the difference between the two groups, the smaller the p-value will be, indicating a significant difference between the two groups.

For the same reason, a big difference between two group means doesn’t necessarily mean a significant difference. For example, if the Km values for wt are 4.2, 4.3, and 6.3, and for the mutant are 2, 3.1, and 3.9, we have a big, but non-significant, difference between the groups (difference = 1.9 and p-value = 0.089). On the other hand, if the Km values for wt are 4.1, 4.2, and 4.3, and for the mutant are 4, 4, and 4, we’ll have a small, but statistically significant, difference between the two groups (difference = 0.2 and p-value = 0.025). This last case would be a time to consider whether statistical significance is necessarily the same as biological significance.

Propagation of error

When measurements that contain error are used in calculations, that error must be carried through in a way that allows you to estimate the error of the final value. This concept is called the propagation of error, and you can find information online for how to do this under a wide range of circumstances. For 551, we will make the assumption that all error values are indeterminate, which means that values have a symmetric distribution around their mean, and errors are unbiased with respect to sign.

There are some basic rules when carrying indeterminate error through mathematical calculations. First, calculate the fractional error (the error as a fraction of the value, so that errors on very different scales can be compared). Then, when multiplying or dividing values that carry error, add their fractional errors. Finally, once the calculation is complete, convert the final fractional error back into an absolute error.

For example, consider a calculation for Kcat for WT HCAII:

k_{cat} = \frac{V_{max} \pm error}{[enzyme] \pm error}

V_{max} = 80.30 \pm 8.987

fractional \ error = \frac{8.987}{80.30} = 0.11

Since the error of the enzyme concentration was not measured, we will assume it is close to 0.

k_{cat} = \frac{80.30}{0.2} = 401.5

final \ fractional \ error = 0.11 + 0 = 0.11

absolute \ error = 401.5 \times 0.11 = 44.17

Therefore, k_{cat} = 401.5 \pm 44.17
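The same calculation is easy to script, which avoids rounding intermediate values by hand. The sketch below follows the rules above; the variable names are illustrative, and carrying the unrounded fractional error (0.1119…) gives an absolute error of about 44.9 rather than the 44.17 obtained from the rounded value 0.11.

```python
vmax, vmax_err = 80.30, 8.987   # Vmax and its error from the curve fit
enzyme = 0.2                    # enzyme concentration; its error is taken as ~0

frac_err = vmax_err / vmax      # fractional error of Vmax
kcat = vmax / enzyme            # division: fractional errors add (0 for [enzyme])
abs_err = kcat * frac_err       # convert back to absolute error

print(f"kcat = {kcat:.1f} \u00b1 {abs_err:.1f}")  # kcat = 401.5 ± 44.9
```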



Biochemistry 551 (Online Version) Lab Manual by Lynne Prost is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
