Categorical Methods

  • This page will calculate the first- and second-order partial correlations for four intercorrelated variables, W, X, Y, and Z. If you enter a value of N (provided N > 9), the program will also calculate the values of t along with the associated two-tailed probability values (see the partial-correlation sketch after this list).

  • This page will calculate the value of chi-square for a one-dimensional "goodness of fit" test, for up to 8 mutually exclusive categories labeled A through H. To enter an observed cell frequency, click the cursor into the appropriate cell, then type in the value. Expected values can be entered as either frequencies or proportions. Toward the bottom of the page is an option for estimating the relevant probability via Monte Carlo simulation of the multinomial sampling distribution (see the goodness-of-fit sketch after this list).

  • For a situation in which independent binomial events are randomly sampled in sequence, this page will calculate (a) the probability that you will end up with exactly k instances of the outcome in question, with the final (kth) instance occurring on trial N; and (b) the probability that you will have to sample at least N events before finding the kth instance of the outcome (see the negative-binomial sketch after this list).

  • This page will calculate the lower and upper limits of the 95% confidence interval for the difference between two independent proportions, according to two methods described by Robert Newcombe, both derived from a procedure outlined by E. B. Wilson in 1927. The first method uses the Wilson procedure without a correction for continuity; the second uses the Wilson procedure with a correction for continuity (see the Newcombe-Wilson sketch after this list).

  • Calculates the z-ratio and associated one-tail and two-tail probabilities for the difference between two correlated proportions, as when the proportions are based on the same sample of subjects or on matched samples (see the correlated-proportions sketch after this list).

  • Calculates the z-ratio and associated one-tail and two-tail probabilities for the difference between two independent proportions (see the independent-proportions sketch after this list).

  • Using the Fisher r-to-z transformation, this page will calculate a value of z that can be applied to assess the significance of the difference between two correlation coefficients, r_a and r_b, found in two independent samples. If r_a is greater than r_b, the resulting value of z will have a positive sign; if r_a is smaller than r_b, the sign of z will be negative (see the two-correlations sketch after this list).

  • This page will calculate the 0.95 and 0.99 confidence intervals for rho, based on the Fisher r-to-z transformation. To perform the calculations, enter the values of r and n in the designated places, then click the "Calculate" button. Note that the confidence interval of rho is symmetrical around the observed r only with large values of n (see the rho confidence-interval sketch after this list).

  • This page will perform the procedure for up to k=12 sample values of r, with a minimum of k=2. It will also perform a chi-square test for the homogeneity of the k values of r, with df=k-1. The several values of r can be regarded as coming from the same population only if the observed chi-square value proves to be non-significant (see the homogeneity-of-r sketch after this list).

  • Using the Fisher r-to-z transformation, this page will calculate a value of z that can be applied to assess the significance of the difference between r, the correlation observed within a sample of size n, and rho, the correlation hypothesized to exist within the population of bivariate values from which the sample is randomly drawn. If r is greater than rho, the resulting value of z will have a positive sign; if r is smaller than rho, the sign of z will be negative (see the r-versus-rho sketch after this list).

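Partial-correlation sketch: a minimal illustration, assuming SciPy is available, of how first- and second-order partial correlations and their t tests might be computed. The function names, the example correlations among W, X, Y, and Z, and N = 30 are hypothetical, not values taken from the page.

```python
import math
from scipy import stats

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation r_XY.Z from three zero-order correlations."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def partial_t(r_p, n, k):
    """t and two-tailed p for a partial correlation with k variables
    partialled out; df = n - 2 - k."""
    df = n - 2 - k
    t = r_p * math.sqrt(df / (1 - r_p**2))
    return t, 2 * stats.t.sf(abs(t), df)

# Hypothetical zero-order correlations among W, X, Y, Z
r_wx, r_wy, r_xy = 0.50, 0.40, 0.30
r_wz, r_xz, r_yz = 0.35, 0.45, 0.25

r_wx_y = partial_r(r_wx, r_wy, r_xy)         # first order: r_WX.Y
r_wz_y = partial_r(r_wz, r_wy, r_yz)         # r_WZ.Y
r_xz_y = partial_r(r_xz, r_xy, r_yz)         # r_XZ.Y
r_wx_yz = partial_r(r_wx_y, r_wz_y, r_xz_y)  # second order: r_WX.YZ

print(r_wx_y, partial_t(r_wx_y, n=30, k=1))
print(r_wx_yz, partial_t(r_wx_yz, n=30, k=2))
```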
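Goodness-of-fit sketch: one way to compute the one-dimensional chi-square test and a Monte Carlo estimate of the probability from the multinomial sampling distribution, assuming NumPy and SciPy. The observed frequencies and expected proportions are hypothetical.

```python
import numpy as np
from scipy import stats

observed = np.array([18, 22, 15, 25])               # hypothetical cell frequencies
expected_prop = np.array([0.25, 0.25, 0.25, 0.25])  # expected proportions
expected = expected_prop * observed.sum()

chi2, p_asymptotic = stats.chisquare(observed, expected)

# Monte Carlo estimate via the multinomial sampling distribution
rng = np.random.default_rng(seed=1)
sims = rng.multinomial(observed.sum(), expected_prop, size=100_000)
sim_chi2 = ((sims - expected) ** 2 / expected).sum(axis=1)
p_monte_carlo = (sim_chi2 >= chi2).mean()

print(chi2, p_asymptotic, p_monte_carlo)
```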
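Negative-binomial sketch: the two probabilities described in that entry follow the negative binomial distribution. The example values of k, N, and p are hypothetical, and SciPy's nbinom (which counts failures before the kth success) is assumed.

```python
from math import comb
from scipy import stats

def p_kth_on_trial_N(k, N, p):
    """P(the kth instance of the outcome occurs exactly on trial N)."""
    return comb(N - 1, k - 1) * p**k * (1 - p)**(N - k)

def p_at_least_N_trials(k, N, p):
    """P(at least N events must be sampled before the kth instance is obtained)."""
    # scipy's nbinom counts failures before the kth success,
    # so "at least N trials" means "at least N - k failures".
    return stats.nbinom.sf(N - k - 1, k, p)

print(p_kth_on_trial_N(k=3, N=10, p=0.25))
print(p_at_least_N_trials(k=3, N=10, p=0.25))
```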
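Newcombe-Wilson sketch: a sketch of the first of the two methods (Wilson score intervals combined without a continuity correction). The page's own computation may differ in detail, and the counts shown are hypothetical.

```python
import math
from scipy import stats

def wilson_ci(x, n, z):
    """Wilson score interval for a single proportion (no continuity correction)."""
    p = x / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / (1 + z**2 / n), (centre + half) / (1 + z**2 / n)

def newcombe_ci(x1, n1, x2, n2, conf=0.95):
    """Newcombe hybrid-score CI for p1 - p2, without continuity correction."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, z)
    l2, u2 = wilson_ci(x2, n2, z)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1)**2 + (u2 - p2)**2)
    upper = d + math.sqrt((u1 - p1)**2 + (p2 - l2)**2)
    return lower, upper

print(newcombe_ci(56, 70, 48, 80))   # hypothetical counts
```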
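Correlated-proportions sketch: one common formulation of the z-ratio for two correlated proportions uses only the two discordant cells of the paired 2x2 table (equivalent to McNemar's test); this may differ in detail from the page's own formula, and the cell counts are hypothetical.

```python
import math
from scipy import stats

def correlated_proportions_z(b, c):
    """z-ratio for the difference between two correlated proportions,
    based on the discordant cells b and c of the paired 2x2 table."""
    z = (b - c) / math.sqrt(b + c)
    p_one = stats.norm.sf(abs(z))
    return z, p_one, 2 * p_one

print(correlated_proportions_z(b=15, c=6))   # hypothetical discordant frequencies
```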
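Independent-proportions sketch: the z-ratio using the pooled estimate of the common proportion; the counts are hypothetical, and the page may additionally offer a continuity correction.

```python
import math
from scipy import stats

def independent_proportions_z(x1, n1, x2, n2):
    """z-ratio for the difference between two independent proportions,
    using the pooled estimate of the common proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_one = stats.norm.sf(abs(z))
    return z, p_one, 2 * p_one

print(independent_proportions_z(40, 100, 25, 90))   # hypothetical counts
```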
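Two-correlations sketch: the Fisher r-to-z comparison of two correlations from independent samples; the values of r_a, r_b, and the sample sizes are hypothetical.

```python
import math
from scipy import stats

def compare_correlations(r_a, n_a, r_b, n_b):
    """z for the difference between two independent correlations,
    via the Fisher r-to-z transformation; positive when r_a > r_b."""
    z_a, z_b = math.atanh(r_a), math.atanh(r_b)
    z = (z_a - z_b) / math.sqrt(1 / (n_a - 3) + 1 / (n_b - 3))
    return z, 2 * stats.norm.sf(abs(z))

print(compare_correlations(r_a=0.63, n_a=45, r_b=0.40, n_b=50))   # hypothetical values
```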
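Rho confidence-interval sketch: the 0.95 and 0.99 intervals for rho obtained by transforming r to Fisher's z, setting limits on the z scale, and transforming back; the values of r and n are hypothetical.

```python
import math
from scipy import stats

def rho_ci(r, n, conf=0.95):
    """Confidence interval for rho via the Fisher r-to-z transformation."""
    z_r = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    z_crit = stats.norm.ppf(1 - (1 - conf) / 2)
    return math.tanh(z_r - z_crit * se), math.tanh(z_r + z_crit * se)

print(rho_ci(0.60, 30), rho_ci(0.60, 30, conf=0.99))   # hypothetical r and n
```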
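Homogeneity-of-r sketch: a common formulation of the chi-square test for the homogeneity of k correlation coefficients, weighting the Fisher z values by (n - 3); it also returns the weighted mean r, which may or may not match the page's procedure in every detail. The sample values of r and n are hypothetical.

```python
import math
from scipy import stats

def homogeneity_of_r(rs, ns):
    """Chi-square test (df = k - 1) for the homogeneity of k correlation
    coefficients, based on Fisher z values weighted by (n - 3)."""
    zs = [math.atanh(r) for r in rs]
    weights = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    chi2 = sum(w * (z - z_bar)**2 for w, z in zip(weights, zs))
    df = len(rs) - 1
    r_mean = math.tanh(z_bar)      # weighted mean r, back-transformed
    return r_mean, chi2, df, stats.chi2.sf(chi2, df)

print(homogeneity_of_r([0.45, 0.55, 0.60], [40, 55, 35]))   # hypothetical samples
```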
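r-versus-rho sketch: the Fisher r-to-z test of an observed r against a hypothesized rho; the values of r, rho, and n are hypothetical.

```python
import math
from scipy import stats

def r_vs_rho(r, rho, n):
    """z for the difference between an observed r and a hypothesized rho,
    via the Fisher r-to-z transformation; positive when r > rho."""
    z = (math.atanh(r) - math.atanh(rho)) * math.sqrt(n - 3)
    return z, 2 * stats.norm.sf(abs(z))

print(r_vs_rho(r=0.45, rho=0.30, n=60))   # hypothetical values
```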
