This R Markdown document provides R code and output for the LOGLINEAR function in the Crosstabs.Loglinear package, applied to multiple datasets.
library(Crosstabs.Loglinear)
**************************************************************************************************
Crosstabs.Loglinear 0.1.1
Please contact Brian O'Connor at brian.oconnor@ubc.ca if you have questions or suggestions.
**************************************************************************************************
LOGLINEAR(data = datasets$Field_2018,
data_type = 'counts',
variables=c('Animal', 'Training', 'Dance'),
Freq = 'Freq' )
The input data:
, , Dance = danced
Training
Animal affection food
Cat 48 28
Dog 29 20
, , Dance = did not dance
Training
Animal affection food
Cat 114 10
Dog 7 14
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 7 200.163 0e+00 253.556 0e+00 242.134
2 4 72.267 0e+00 67.174 0e+00 120.238
3 1 20.305 1e-05 20.777 1e-05 74.275
0 0 0.000 1e+00 0.000 1e+00 55.971
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
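As a hedged illustration of what this table summarizes (not the package's internal code), the K = 2 row can be approximated in base R: the residual deviance of the main-effects-only Poisson model is its likelihood ratio chi-square against the saturated model. The sketch assumes datasets$Field_2018 is a counts data frame with columns Animal, Training, Dance, and Freq, as implied by the call above.
# sketch: approximate the K = 2 row with a Poisson loglinear model (base R)
tab <- as.data.frame(datasets$Field_2018)
fit_main <- glm(Freq ~ Animal + Training + Dance, family = poisson, data = tab)
deviance(fit_main)                            # LR chi-square (approx. 72.27 above)
sum(residuals(fit_main, type = 'pearson')^2)  # Pearson chi-square (approx. 67.17 above)
fit_main$df.residual                          # df (4)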
K-Way Effects
K df LR Chi-Square       p Pearson Chi-Square       p AIC diff.
1  3       127.896   0e+00            186.382   0e+00   121.896
2  3        51.962   0e+00             46.396   0e+00    45.962
3  1        20.305   1e-05             20.777   1e-05    18.305
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
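As a rough sketch of the model-comparison logic described above (reusing tab and fit_main from the earlier sketch), the K = 2 difference test compares the main-effects-only model with the model that adds all two-way interactions:
# sketch: the K = 2 difference test as a comparison of nested Poisson models
fit_2way <- glm(Freq ~ (Animal + Training + Dance)^2, family = poisson, data = tab)
anova(fit_main, fit_2way, test = 'Chisq')     # LR chi-square change (approx. 51.96 above)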
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Animal:Training 13.76 1 0.00021 11.76
3 Animal:Dance 13.748 1 0.00021 11.748
4 Training:Dance 8.611 1 0.00334 6.611
5
6 Animal 65.268 1 0 63.268
7 Training 61.145 1 0 59.145
8 Dance 1.483 1 0.22333 -0.517
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
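A hedged base-R sketch of the same idea (reusing fit_2way from the sketch above): remove one two-way term from the model with all two-way interactions and compare the fits.
# sketch: the partial-association test for Animal:Training
fit_no_AT <- update(fit_2way, . ~ . - Animal:Training)
anova(fit_no_AT, fit_2way, test = 'Chisq')    # LR chi-square (approx. 13.76 above)
drop1(fit_2way, test = 'Chisq')               # the same comparison for every two-way term at once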
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 3.177 0.083 38.117 0.00000
Animal1 0.404 0.083 4.843 0.00000
Training1 0.328 0.083 3.937 0.00008
Dance1 0.232 0.083 2.782 0.00540
Animal1:Training1 0.402 0.083 4.823 0.00000
Animal1:Dance1 -0.197 0.083 -2.364 0.01809
Training1:Dance1 -0.104 0.083 -1.251 0.21086
Animal1:Training1:Dance1 -0.360 0.083 -4.320 0.00002
CI_lb CI_ub
(Intercept) 3.014 3.341
Animal1 0.240 0.567
Training1 0.165 0.492
Dance1 0.069 0.395
Animal1:Training1 0.239 0.565
Animal1:Dance1 -0.360 -0.034
Training1:Dance1 -0.268 0.059
Animal1:Training1:Dance1 -0.523 -0.197
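The effect-coded estimates above can be sketched in base R by adding .5 to every cell and fitting the saturated Poisson model with sum-to-zero contrasts (reusing tab from the earlier sketch). This is only an illustration of the coding scheme; the package's standard errors and confidence intervals may be computed differently.
# sketch: effect-coded (sum-to-zero) estimates for the saturated model, with .5 added
tab_5 <- tab
tab_5$Freq <- tab_5$Freq + 0.5
# ensure the classification variables are factors before applying the contrasts
tab_5[c('Animal', 'Training', 'Dance')] <- lapply(tab_5[c('Animal', 'Training', 'Dance')], factor)
fit_ec <- glm(Freq ~ Animal * Training * Dance, family = poisson, data = tab_5,
              contrasts = list(Animal = 'contr.sum', Training = 'contr.sum',
                               Dance = 'contr.sum'))
summary(fit_ec)$coefficients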
Backward Elimination Statistics:
Step GenDel           Effects               LR_Chi_Square df     p    AIC
   0 Generating Class Animal:Training:Dance             0  0     1 55.971
     Deleted Effect   Animal:Training:Dance        20.305  1 1e-05 74.275
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
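A hedged base-R sketch of one elimination step (reusing tab from above): for a hierarchical model, drop1() only considers terms that are not contained in a higher-order interaction, which here is only the three-way term of the saturated model.
# sketch: one backward-elimination step starting from the saturated model
fit_sat <- glm(Freq ~ Animal * Training * Dance, family = poisson, data = tab)
drop1(fit_sat, test = 'Chisq')   # LR chi-square approx. 20.31; significant, so the term is retained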
The Final Model Formula:
Freq ~ Animal + Training + Dance + Animal:Training + Animal:Dance + Training:Dance + Animal:Training:Dance
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 55.971
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value
(Intercept) 3.871 0.144 26.820
AnimalDog -0.504 0.235 -2.143
Trainingfood -0.539 0.238 -2.267
Dancedid not dance 0.865 0.172 5.027
AnimalDog:Trainingfood 0.167 0.376 0.446
AnimalDog:Dancedid not dance -2.286 0.455 -5.026
Trainingfood:Dancedid not dance -1.895 0.407 -4.660
AnimalDog:Trainingfood:Dancedid not dance 2.959 0.681 4.344
Pr(>|z|)
(Intercept) 0.000
AnimalDog 0.032
Trainingfood 0.023
Dancedid not dance 0.000
AnimalDog:Trainingfood 0.656
AnimalDog:Dancedid not dance 0.000
Trainingfood:Dancedid not dance 0.000
AnimalDog:Trainingfood:Dancedid not dance 0.000
Cell Counts and Residuals:
Animal Training Dance Obsd. Freq. Exp. Freq.
1 Cat affection danced 48 48
5 Cat affection did not dance 114 114
3 Cat food danced 28 28
7 Cat food did not dance 10 10
2 Dog affection danced 29 29
6 Dog affection did not dance 7 7
4 Dog food danced 20 20
8 Dog food did not dance 14 14
Residuals Std. Resid. Adjusted Resid.
1 0 0 0
5 0 0 0
3 0 0 0
7 0 0 0
2 0 0 0
6 0 0 0
4 0 0 0
8 0 0 0
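Because the final model here is the saturated model, the coefficients and expected frequencies above can be reproduced (up to rounding) from the fit_sat object in the backward-elimination sketch:
summary(fit_sat)$coefficients    # treatment-coded coefficients, as in the table above
fitted(fit_sat)                  # expected frequencies; equal to the observed counts for a saturated model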
# when 'data' is a file with the raw data points (rather than counts/frequencies)
LOGLINEAR(data = datasets$Field_2018_raw,
data_type = 'raw',
variables=c('Animal', 'Training', 'Dance'),
Freq = NULL )
The input data:
, , Dance = danced
Training
Animal affection food
Cat 48 28
Dog 29 20
, , Dance = did not dance
Training
Animal affection food
Cat 114 10
Dog 7 14
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 7 200.163 0e+00 253.556 0e+00 242.134
2 4 72.267 0e+00 67.174 0e+00 120.238
3 1 20.305 1e-05 20.777 1e-05 74.275
0 0 0.000 1e+00 0.000 1e+00 55.971
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square       p Pearson Chi-Square       p AIC diff.
1  3       127.896   0e+00            186.382   0e+00   121.896
2  3        51.962   0e+00             46.396   0e+00    45.962
3  1        20.305   1e-05             20.777   1e-05    18.305
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Animal:Training 13.76 1 0.00021 11.76
3 Animal:Dance 13.748 1 0.00021 11.748
4 Training:Dance 8.611 1 0.00334 6.611
5
6 Animal 65.268 1 0 63.268
7 Training 61.145 1 0 59.145
8 Dance 1.483 1 0.22333 -0.517
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 3.177 0.083 38.117 0.00000
Animal1 0.404 0.083 4.843 0.00000
Training1 0.328 0.083 3.937 0.00008
Dance1 0.232 0.083 2.782 0.00540
Animal1:Training1 0.402 0.083 4.823 0.00000
Animal1:Dance1 -0.197 0.083 -2.364 0.01809
Training1:Dance1 -0.104 0.083 -1.251 0.21086
Animal1:Training1:Dance1 -0.360 0.083 -4.320 0.00002
CI_lb CI_ub
(Intercept) 3.014 3.341
Animal1 0.240 0.567
Training1 0.165 0.492
Dance1 0.069 0.395
Animal1:Training1 0.239 0.565
Animal1:Dance1 -0.360 -0.034
Training1:Dance1 -0.268 0.059
Animal1:Training1:Dance1 -0.523 -0.197
Backward Elimination Statistics:
Step GenDel           Effects               LR_Chi_Square df     p    AIC
   0 Generating Class Animal:Training:Dance             0  0     1 55.971
     Deleted Effect   Animal:Training:Dance        20.305  1 1e-05 74.275
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ Animal + Training + Dance + Animal:Training + Animal:Dance + Training:Dance + Animal:Training:Dance
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 55.971
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value
(Intercept) 3.871 0.144 26.820
AnimalDog -0.504 0.235 -2.143
Trainingfood -0.539 0.238 -2.267
Dancedid not dance 0.865 0.172 5.027
AnimalDog:Trainingfood 0.167 0.376 0.446
AnimalDog:Dancedid not dance -2.286 0.455 -5.026
Trainingfood:Dancedid not dance -1.895 0.407 -4.660
AnimalDog:Trainingfood:Dancedid not dance 2.959 0.681 4.344
Pr(>|z|)
(Intercept) 0.000
AnimalDog 0.032
Trainingfood 0.023
Dancedid not dance 0.000
AnimalDog:Trainingfood 0.656
AnimalDog:Dancedid not dance 0.000
Trainingfood:Dancedid not dance 0.000
AnimalDog:Trainingfood:Dancedid not dance 0.000
Cell Counts and Residuals:
Animal Training Dance Obsd. Freq. Exp. Freq.
1 Cat affection danced 48 48
5 Cat affection did not dance 114 114
3 Cat food danced 28 28
7 Cat food did not dance 10 10
2 Dog affection danced 29 29
6 Dog affection did not dance 7 7
4 Dog food danced 20 20
8 Dog food did not dance 14 14
Residuals Std. Resid. Adjusted Resid.
1 0 0 0
5 0 0 0
3 0 0 0
7 0 0 0
2 0 0 0
6 0 0 0
4 0 0 0
8 0 0 0
# example of creating and entering a two-dimensional contingency table for 'data'
food <- c(28, 10)
affection <- c(48, 114)
Field_2018_cats_conTable <- as.table(rbind(food, affection))
colnames(Field_2018_cats_conTable) <- c('danced', 'did not dance')
# add dimension names to the table
names(attributes(Field_2018_cats_conTable)$dimnames) <- c('Training','Dance')
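An equivalent one-step construction with matrix() (same counts as above, shown only as an alternative):
Field_2018_cats_conTable <- as.table(matrix(c(28, 10, 48, 114), nrow = 2, byrow = TRUE,
    dimnames = list(Training = c('food', 'affection'),
                    Dance = c('danced', 'did not dance'))))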
LOGLINEAR(data = Field_2018_cats_conTable,
data_type = 'cont.table',
variables=c('Training', 'Dance') )
The input data:
Dance
Training danced did not dance
food 28 10
affection 48 114
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 3 119.334 0 123.680 0 142.956
2 1 24.932 0 25.356 0 52.553
0 0 0.000 1 0.000 1 29.621
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p AIC diff.
1 2 94.403 0 98.324 0 90.403
2 1 24.932 0 25.356 0 22.932
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Training 82.77 1 0 80.77
3 Dance 11.633 1 0.00065 9.633
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 3.581 0.1 35.845 0.00000 3.385
Training1 -0.730 0.1 -7.310 0.00000 -0.926
Dance1 0.035 0.1 0.349 0.72698 -0.161
Training1:Dance1 0.464 0.1 4.649 0.00000 0.269
CI_ub
(Intercept) 3.777
Training1 -0.534
Dance1 0.231
Training1:Dance1 0.660
Backward Elimination Statistics:
Step GenDel           Effects        LR_Chi_Square df p    AIC
   0 Generating Class Training:Dance             0  0 1 29.621
     Deleted Effect   Training:Dance        24.932  1 0 52.553
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ Training + Dance + Training:Dance
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 29.621
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value
(Intercept) 3.332 0.189 17.632
Trainingaffection 0.539 0.238 2.267
Dancedid not dance -1.030 0.368 -2.795
Trainingaffection:Dancedid not dance 1.895 0.407 4.660
Pr(>|z|)
(Intercept) 0.000
Trainingaffection 0.023
Dancedid not dance 0.005
Trainingaffection:Dancedid not dance 0.000
Cell Counts and Residuals:
Training Dance Obsd. Freq. Exp. Freq. Residuals
1 food danced 28 28 0
3 food did not dance 10 10 0
2 affection danced 48 48 0
4 affection did not dance 114 114 0
Std. Resid. Adjusted Resid.
1 0 0
3 0 0
2 0 0
4 0 0
LOGLINEAR(data = datasets$Gray_2012_2way,
data_type = 'counts',
variables=c('Group','Presence'),
Freq = 'Freq')
The input data:
Presence
Group No Yes
Type A 14 8
Type B 11 7
Type C 5 7
Type Critical 6 21
K - Way and Higher-Order Effects
K df LR Chi-Square       p Pearson Chi-Square       p    AIC
1  7        18.048 0.01175             20.342 0.00488 52.370
2  3        11.093 0.01123             10.655 0.01374 53.415
0  0         0.000 1.00000              0.000 1.00000 48.321
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square       p Pearson Chi-Square       p AIC diff.
1  4         6.955 0.13828              9.686 0.04605    -1.045
2  3        11.093 0.01123             10.655 0.01374     5.093
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
collective importance of the effects at each value of K. They are not tests
of individual terms. For example, a significant K = 2 test means that the set
of two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Group 6.334 3 0.09645 0.334
3 Presence 0.621 1 0.43065 -1.379
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 2.241 0.120 18.670 0.00000 2.006
Group1 0.166 0.194 0.853 0.39358 -0.215
Group2 -0.013 0.205 -0.062 0.95038 -0.414
Group3 -0.382 0.232 -1.645 0.10000 -0.836
Presence1 -0.068 0.120 -0.567 0.57049 -0.303
Group1:Presence1 0.335 0.194 1.725 0.08449 -0.046
Group2:Presence1 0.282 0.205 1.376 0.16881 -0.120
Group3:Presence1 -0.087 0.232 -0.375 0.70772 -0.542
CI_ub
(Intercept) 2.477
Group1 0.546
Group2 0.389
Group3 0.073
Presence1 0.167
Group1:Presence1 0.716
Group2:Presence1 0.683
Group3:Presence1 0.368
Backward Elimination Statistics:
Step GenDel           Effects        LR_Chi_Square df       p    AIC
   0 Generating Class Group:Presence             0  0       1 48.321
     Deleted Effect   Group:Presence        11.093  3 0.01123 53.415
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ Group + Presence + Group:Presence
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 1 0 0 48.321
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.639 0.267 9.874 0.000
GroupType B -0.241 0.403 -0.599 0.549
GroupType C -1.030 0.521 -1.976 0.048
GroupType Critical -0.847 0.488 -1.736 0.082
PresenceYes -0.560 0.443 -1.263 0.207
GroupType B:PresenceYes 0.108 0.656 0.164 0.870
GroupType C:PresenceYes 0.896 0.734 1.220 0.222
GroupType Critical:PresenceYes 1.812 0.641 2.828 0.005
Cell Counts and Residuals:
Group Presence Obsd. Freq. Exp. Freq. Residuals
1 Type A No 14 14 0
5 Type A Yes 8 8 0
2 Type B No 11 11 0
6 Type B Yes 7 7 0
3 Type C No 5 5 0
7 Type C Yes 7 7 0
4 Type Critical No 6 6 0
8 Type Critical Yes 21 21 0
Std. Resid. Adjusted Resid.
1 0 0
5 0 0
2 0 0
6 0 0
3 0 0
7 0 0
4 0 0
8 0 0
LOGLINEAR(data=datasets$Gray_2012_3way,
data_type = 'counts',
variables=c('Interviewer','Participant','Help'),
Freq = 'Freq')
The input data:
, , Help = No
Participant
Interviewer Female Male
Female 14 14
Male 9 21
, , Help = Yes
Participant
Interviewer Female Male
Female 11 11
Male 16 4
K - Way and Higher-Order Effects
K df LR Chi-Square       p Pearson Chi-Square       p    AIC
1  7        15.382 0.03140             14.240 0.04707 51.692
2  4        12.811 0.01224             11.987 0.01745 55.121
3  1         6.659 0.00987              6.521 0.01066 54.969
0  0         0.000 1.00000              0.000 1.00000 50.310
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square       p Pearson Chi-Square       p AIC diff.
1  3         2.571 0.46259              2.253 0.52156    -3.429
2  3         6.152 0.10444              5.466 0.14070     0.152
3  1         6.659 0.00987              6.521 0.01066     4.659
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
collective importance of the effects at each value of K. They are not tests
of individual terms. For example, a significant K = 2 test means that the set
of two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Interviewer:Participant 0.01 1 0.91903 -1.99
3 Interviewer:Help 0.175 1 0.67607 -1.825
4 Participant:Help 5.988 1 0.0144 3.988
5
6 Interviewer 0 1 1 -2
7 Participant 0 1 1 -2
8 Help 2.571 1 0.10884 0.571
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 2.482 0.108 22.987 0.00000
Interviewer1 0.076 0.108 0.702 0.48290
Participant1 0.060 0.108 0.558 0.57651
Help1 0.184 0.108 1.708 0.08767
Interviewer1:Participant1 -0.060 0.108 -0.558 0.57651
Interviewer1:Help1 -0.069 0.108 -0.635 0.52567
Participant1:Help1 -0.265 0.108 -2.449 0.01432
Interviewer1:Participant1:Help1 0.265 0.108 2.449 0.01432
CI_lb CI_ub
(Intercept) 2.271 2.694
Interviewer1 -0.136 0.287
Participant1 -0.151 0.272
Help1 -0.027 0.396
Interviewer1:Participant1 -0.272 0.151
Interviewer1:Help1 -0.280 0.143
Participant1:Help1 -0.476 -0.053
Interviewer1:Participant1:Help1 0.053 0.476
Backward Elimination Statistics:
Step GenDel           Effects                      LR_Chi_Square df       p    AIC
   0 Generating Class Interviewer:Participant:Help             0  0       1  50.31
     Deleted Effect   Interviewer:Participant:Help         6.659  1 0.00987 54.969
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ Interviewer + Participant + Help + Interviewer:Participant + Interviewer:Help + Participant:Help + Interviewer:Participant:Help
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 1 0 0 50.31
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value
(Intercept) 2.639 0.267 9.874
InterviewerMale -0.442 0.427 -1.034
ParticipantMale 0.000 0.378 0.000
HelpYes -0.241 0.403 -0.599
InterviewerMale:ParticipantMale 0.847 0.549 1.543
InterviewerMale:HelpYes 0.817 0.580 1.409
ParticipantMale:HelpYes 0.000 0.570 0.000
InterviewerMale:ParticipantMale:HelpYes -2.234 0.892 -2.504
Pr(>|z|)
(Intercept) 0.000
InterviewerMale 0.301
ParticipantMale 1.000
HelpYes 0.549
InterviewerMale:ParticipantMale 0.123
InterviewerMale:HelpYes 0.159
ParticipantMale:HelpYes 1.000
InterviewerMale:ParticipantMale:HelpYes 0.012
Cell Counts and Residuals:
Interviewer Participant Help Obsd. Freq. Exp. Freq.
1 Female Female No 14 14
5 Female Female Yes 11 11
3 Female Male No 14 14
7 Female Male Yes 11 11
2 Male Female No 9 9
6 Male Female Yes 16 16
4 Male Male No 21 21
8 Male Male Yes 4 4
Residuals Std. Resid. Adjusted Resid.
1 0 0 0
5 0 0 0
3 0 0 0
7 0 0 0
2 0 0 0
6 0 0 0
4 0 0 0
8 0 0 0
LOGLINEAR(data = datasets$Howell_2017,
data_type = 'counts',
variables=c('Drug','Heart_Attack'),
Freq = 'Freq')
The input data:
Heart_Attack
Drug No Yes
Aspirin 10933 104
Placebo 10845 189
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 3 27507.580 0 20915.915 0 27545.411
2 1 25.372 0 25.014 0 67.203
0 0 0.000 1 0.000 1 43.831
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p AIC diff.
1 2 27482.208 0 20890.901 0 27478.208
2 1 25.372 0 25.014 0 23.372
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
collective importance of the effects at each value of K. They are not tests
of individual terms. For example, a significant K = 2 test means that the set
of two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Drug 0 1 0.98389 -2
3 Heart_Attack 27482.207 1 0 27480.207
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 7.121 0.031 232.343 0 7.061
Drug1 -0.147 0.031 -4.789 0 -0.207
Heart_Attack1 2.174 0.031 70.944 0 2.114
Drug1:Heart_Attack1 0.151 0.031 4.921 0 0.091
CI_ub
(Intercept) 7.181
Drug1 -0.087
Heart_Attack1 2.234
Drug1:Heart_Attack1 0.211
Backward Elimination Statistics:
Step GenDel           Effects           LR_Chi_Square df p    AIC
   0 Generating Class Drug:Heart_Attack             0  0 1 43.831
     Deleted Effect   Drug:Heart_Attack        25.372  1 0 67.203
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ Drug + Heart_Attack + Drug:Heart_Attack
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 43.831
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 9.300 0.010 972.369 0.000
DrugPlacebo -0.008 0.014 -0.596 0.551
Heart_AttackYes -4.655 0.099 -47.249 0.000
DrugPlacebo:Heart_AttackYes 0.605 0.123 4.929 0.000
Cell Counts and Residuals:
Drug Heart_Attack Obsd. Freq. Exp. Freq. Residuals
1 Aspirin No 10933 10933 0
3 Aspirin Yes 104 104 0
2 Placebo No 10845 10845 0
4 Placebo Yes 189 189 0
Std. Resid. Adjusted Resid.
1 0 0
3 0 0
2 0 0
4 0 0
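For this 2 x 2 table, the Drug:Heart_Attack coefficient in the final (saturated) model is the log of the sample odds ratio, which can be checked directly from the observed counts above:
(10933 * 189) / (104 * 10845)   # sample odds ratio, approx. 1.83
exp(0.605)                      # exp of the DrugPlacebo:Heart_AttackYes estimate, approx. 1.83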
LOGLINEAR(data = datasets$Meyers_2013,
data_type = 'counts',
variables=c('physically_active','obesity', 'hist_myocardial_infarction'),
Freq = 'Freq')
The input data:
, , hist_myocardial_infarction = no
obesity
physically_active no yes
not active 262 80
active 365 58
, , hist_myocardial_infarction = yes
obesity
physically_active no yes
not active 108 81
active 65 29
K - Way and Higher-Order Effects
K df LR Chi-Square       p Pearson Chi-Square       p     AIC
1  7       639.674 0.00000            745.313 0.00000 692.924
2  4       103.288 0.00000            118.908 0.00000 162.538
3  1         0.165 0.68419              0.166 0.68371  65.415
0  0         0.000 1.00000              0.000 1.00000  67.250
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square       p Pearson Chi-Square       p AIC diff.
1  3       536.386 0.00000            626.405 0.00000   530.386
2  3       103.123 0.00000            118.742 0.00000    97.123
3  1         0.165 0.68419              0.166 0.68371    -1.835
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions and the change/difference in chi-square and
likelihood ratio chi-square values are computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
collective importance of the effects at each value of K. They are not tests
of individual terms. For example, a significant K = 2 test means that the set
of two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df
1
2 physically_active:obesity 15.635 1
3 physically_active:hist_myocardial_infarction 29.821 1
4 obesity:hist_myocardial_infarction 35.461 1
5
6 physically_active 0.187 1
7 obesity 305.953 1
8 hist_myocardial_infarction 230.246 1
p AIC.diff.
1
2 8e-05 13.635
3 0 27.821
4 0 33.461
5
6 0.6654 -1.813
7 0 303.953
8 0 228.246
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of 2-way interactions are for the
differences between the model with all 2-way interactions (and no higher-order
interactions) and the model when each individual 2-way interaction is removed in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate
(Intercept) 4.573
physically_active1 0.189
obesity1 0.512
hist_myocardial_infarction1 0.409
physically_active1:obesity1 -0.145
physically_active1:hist_myocardial_infarction1 -0.192
obesity1:hist_myocardial_infarction1 0.241
physically_active1:obesity1:hist_myocardial_infarction1 -0.017
Std. Error
(Intercept) 0.041
physically_active1 0.041
obesity1 0.041
hist_myocardial_infarction1 0.041
physically_active1:obesity1 0.041
physically_active1:hist_myocardial_infarction1 0.041
obesity1:hist_myocardial_infarction1 0.041
physically_active1:obesity1:hist_myocardial_infarction1 0.041
z value p
(Intercept) 111.986 0.00000
physically_active1 4.620 0.00000
obesity1 12.545 0.00000
hist_myocardial_infarction1 10.025 0.00000
physically_active1:obesity1 -3.556 0.00038
physically_active1:hist_myocardial_infarction1 -4.692 0.00000
obesity1:hist_myocardial_infarction1 5.909 0.00000
physically_active1:obesity1:hist_myocardial_infarction1 -0.425 0.67106
CI_lb CI_ub
(Intercept) 4.493 4.653
physically_active1 0.109 0.269
obesity1 0.432 0.592
hist_myocardial_infarction1 0.329 0.489
physically_active1:obesity1 -0.225 -0.065
physically_active1:hist_myocardial_infarction1 -0.272 -0.112
obesity1:hist_myocardial_infarction1 0.161 0.321
physically_active1:obesity1:hist_myocardial_infarction1 -0.097 0.063
Backward Elimination Statistics:
Step GenDel               Effects                                               LR_Chi_Square df       p    AIC
   0 Generating Class     physically_active:obesity:hist_myocardial_infarction             0  0       1  67.25
     Deleted Effect       physically_active:obesity:hist_myocardial_infarction         0.165  1 0.68419 65.415
   1 Generating Class     All of these terms:                                           0.165  1 0.68419 -1.835
                            physically_active
                            obesity
                            hist_myocardial_infarction
                            physically_active:obesity
                            physically_active:hist_myocardial_infarction
                            obesity:hist_myocardial_infarction
     Deleted Effect Test   physically_active:obesity                                   15.635  1   8e-05 13.635
     Deleted Effect Test   physically_active:hist_myocardial_infarction                29.821  1       0 27.821
     Deleted Effect Test   obesity:hist_myocardial_infarction                          35.461  1       0 33.461
     Deleted On This Step  none deleted
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only when it is determined that removing the term does
not result in a reduction in model fit AND if the term is not involved in any
higher order interaction. On each Step above, the focus is on the term that results
in the least-significant change in the likelihood ratio chi-square if removed.
If the change is not significant, then the term is removed.
The Final Model Formula:
Freq ~ physically_active + obesity + hist_myocardial_infarction + physically_active:obesity + physically_active:hist_myocardial_infarction + obesity:hist_myocardial_infarction
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
1 0.165 0.68419 0.166 0.68371 65.415
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error
(Intercept) 5.573 0.061
physically_activeactive 0.323 0.078
obesityyes -1.207 0.118
hist_myocardial_infarctionyes -0.902 0.108
physically_activeactive:obesityyes -0.608 0.155
physically_activeactive:hist_myocardial_infarctionyes -0.801 0.149
obesityyes:hist_myocardial_infarctionyes 0.946 0.157
z value Pr(>|z|)
(Intercept) 92.042 0
physically_activeactive 4.123 0
obesityyes -10.200 0
hist_myocardial_infarctionyes -8.389 0
physically_activeactive:obesityyes -3.917 0
physically_activeactive:hist_myocardial_infarctionyes -5.379 0
obesityyes:hist_myocardial_infarctionyes 6.013 0
Cell Counts and Residuals:
physically_active obesity hist_myocardial_infarction Obsd. Freq.
1 not active no no 262
5 not active no yes 108
3 not active yes no 80
7 not active yes yes 81
2 active no no 365
6 active no yes 65
4 active yes no 58
8 active yes yes 29
Exp. Freq. Residuals Std. Resid. Adjusted Resid.
1 263.235 -1.235 -0.076 -0.076
5 106.765 1.235 0.120 0.119
3 78.765 1.235 0.139 0.139
7 82.235 -1.235 -0.136 -0.137
2 363.765 1.235 0.065 0.065
6 66.235 -1.235 -0.152 -0.152
4 59.235 -1.235 -0.161 -0.161
8 27.765 1.235 0.234 0.233
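A hedged sketch of the final (all two-way interactions) model for this dataset, assuming datasets$Meyers_2013 is a counts data frame with a Freq column, as implied by the call above. The residual deviance and residuals should correspond, up to rounding, to the goodness-of-fit statistics and residuals reported above.
tabM <- as.data.frame(datasets$Meyers_2013)
fitM <- glm(Freq ~ (physically_active + obesity + hist_myocardial_infarction)^2,
            family = poisson, data = tabM)
deviance(fitM)                       # final-model LR chi-square (approx. 0.165 above)
residuals(fitM, type = 'response')   # raw residuals, as in the Residuals column
residuals(fitM, type = 'pearson')    # standardized residuals, as in Std. Resid.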
LOGLINEAR(data = datasets$Noursis_2012_marital,
data_type = 'counts',
variables=c('Marital_Status','Gen.Happiness'),
Freq = 'Freq' )
The input data:
Gen.Happiness
Marital_Status Happy Not Happy
Married 566 38
Split 320 72
Never Married 313 60
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 5 980.206 0 958.040 0 1023.106
2 2 40.480 0 38.217 0 89.380
0 0 0.000 1 0.000 1 52.900
These are tests that K - Way and Higher-Order Effects are zero, i.e., tests
of the hypothesis that the Kth-order and higher interactions are zero.
The rows of the table show the consequences of removing these effects and
all higher-order effects from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC values for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p AIC diff.
1 3 939.725 0 919.823 0 933.725
2 2 40.480 0 38.217 0 36.480
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
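As a concrete sketch of the K = 2 comparison described above (not the package's internal code), the same likelihood-ratio chi-square of about 40.48 on 2 df can be reproduced in base R by comparing the main-effects-only Poisson model with the saturated model. The data frame and object names below are built here by hand from the counts shown above.
# Hand-entered counts from the Marital_Status x Gen.Happiness input table above
marital <- data.frame(
  Marital_Status = rep(c("Married", "Split", "Never Married"), each = 2),
  Gen.Happiness  = rep(c("Happy", "Not Happy"), times = 3),
  Freq           = c(566, 38, 320, 72, 313, 60)
)
# K = 2 test: main-effects-only model vs. the saturated model; the change in
# deviance is the likelihood-ratio chi-square (about 40.48 on 2 df)
fit_main <- glm(Freq ~ Marital_Status + Gen.Happiness, family = poisson, data = marital)
fit_sat  <- glm(Freq ~ Marital_Status * Gen.Happiness, family = poisson, data = marital)
anova(fit_main, fit_sat, test = "LRT")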
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Marital_Status 69.098 2 0 65.098
3 Gen.Happiness 870.627 1 0 868.627
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
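The partial-association chi-squares above can be reproduced, approximately, with drop1() applied to the model containing all terms of the relevant order. The sketch below assumes the hand-built marital data frame from the earlier example and is not the package's own routine.
# Drop each main effect in turn from the main-effects model and test the change
# in deviance; the LRT values should be close to the LR.Chi.Square column above
# (about 69.1 for Marital_Status and 870.6 for Gen.Happiness)
drop1(glm(Freq ~ Marital_Status + Gen.Happiness, family = poisson, data = marital),
      test = "LRT")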
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 4.982 0.042 117.826 0.00000
Marital_Status1 0.013 0.064 0.199 0.84245
Marital_Status2 0.044 0.057 0.785 0.43241
Gen.Happiness1 0.970 0.042 22.940 0.00000
Marital_Status1:Gen.Happiness1 0.374 0.064 5.847 0.00000
Marital_Status2:Gen.Happiness1 -0.227 0.057 -4.013 0.00006
CI_lb CI_ub
(Intercept) 4.899 5.065
Marital_Status1 -0.113 0.138
Marital_Status2 -0.066 0.155
Gen.Happiness1 0.887 1.053
Marital_Status1:Gen.Happiness1 0.249 0.500
Marital_Status2:Gen.Happiness1 -0.338 -0.116
Backward Elimination Statistics:
Step GenDel Effects LR_Chi_Square
0 Generating Class Marital_Status:Gen.Happiness 0
Deleted Effect Marital_Status:Gen.Happiness 40.48
df p AIC
0 1 52.9
2 0 89.38
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
The Final Model Formula:
Freq ~ Marital_Status + Gen.Happiness + Marital_Status:Gen.Happiness
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 52.9
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error
(Intercept) 6.339 0.042
Marital_StatusSplit -0.570 0.070
Marital_StatusNever Married -0.592 0.070
Gen.HappinessNot Happy -2.701 0.168
Marital_StatusSplit:Gen.HappinessNot Happy 1.209 0.212
Marital_StatusNever Married:Gen.HappinessNot Happy 1.049 0.219
z value Pr(>|z|)
(Intercept) 150.800 0
Marital_StatusSplit -8.154 0
Marital_StatusNever Married -8.410 0
Gen.HappinessNot Happy -16.118 0
Marital_StatusSplit:Gen.HappinessNot Happy 5.695 0
Marital_StatusNever Married:Gen.HappinessNot Happy 4.791 0
Cell Counts and Residuals:
Marital_Status Gen.Happiness Obsd. Freq. Exp. Freq. Residuals
1 Married Happy 566 566 0
4 Married Not Happy 38 38 0
2 Split Happy 320 320 0
5 Split Not Happy 72 72 0
3 Never Married Happy 313 313 0
6 Never Married Not Happy 60 60 0
Std. Resid. Adjusted Resid.
1 0 0
4 0 0
2 0 0
5 0 0
3 0 0
6 0 0
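The final-model coefficients reported above are ordinary Poisson-regression (log-linear) coefficients with treatment (dummy) coding, so a plain glm() call on the same counts should reproduce them up to rounding. This sketch again assumes the hand-built marital data frame; the factor releveling is only to match the ordering shown above.
# Set the reference levels used in the final-model table above
marital$Marital_Status <- factor(marital$Marital_Status,
                                 levels = c("Married", "Split", "Never Married"))
marital$Gen.Happiness  <- factor(marital$Gen.Happiness,
                                 levels = c("Happy", "Not Happy"))
# The final model here is the saturated model; e.g., the intercept should be
# close to log(566) = 6.339
fit_final <- glm(Freq ~ Marital_Status * Gen.Happiness, family = poisson, data = marital)
summary(fit_final)$coefficients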
LOGLINEAR(data = datasets$Noursis_2012_voting_degree,
data_type = 'counts',
variables=c('Vote','College.Degree'),
Freq = 'Freq' )
The input data:
College.Degree
Vote No Yes
No 369 50
Yes 659 372
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p AIC
1 3 622.030 0 512.279 0 653.618
2 1 94.241 0 84.199 0 129.829
0 0 0.000 1 0.000 1 37.588
These are tests that the K - Way and Higher-Order Effects are zero, i.e., tests of
the hypothesis that the Kth-order and all higher-order interactions are zero.
Each row shows the consequences of removing the K-way effects, and all
higher-order effects, from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC value for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p AIC diff.
1 2 527.789 0 428.079 0 523.789
2 1 94.241 0 84.199 0 92.241
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Vote 266.581 1 0 264.581
3 College.Degree 261.208 1 0 259.208
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 5.561 0.041 136.119 0 5.481
Vote1 -0.644 0.041 -15.772 0 -0.724
College.Degree1 0.640 0.041 15.673 0 0.560
Vote1:College.Degree1 0.355 0.041 8.682 0 0.275
CI_ub
(Intercept) 5.642
Vote1 -0.564
College.Degree1 0.720
Vote1:College.Degree1 0.435
Backward Elimination Statistics:
Step GenDel Effects LR_Chi_Square df p
0 Generating Class Vote:College.Degree 0 0 1
Deleted Effect Vote:College.Degree 94.241 1 0
AIC
37.588
129.829
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
The Final Model Formula:
Freq ~ Vote + College.Degree + Vote:College.Degree
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 37.588
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.911 0.052 113.543 0
VoteYes 0.580 0.065 8.919 0
College.DegreeYes -1.999 0.151 -13.263 0
VoteYes:College.DegreeYes 1.427 0.164 8.698 0
Cell Counts and Residuals:
Vote College.Degree Obsd. Freq. Exp. Freq. Residuals
1 No No 369 369 0
3 No Yes 50 50 0
2 Yes No 659 659 0
4 Yes Yes 372 372 0
Std. Resid. Adjusted Resid.
1 0 0
3 0 0
2 0 0
4 0 0
LOGLINEAR(data = datasets$Stevens_2009_HeadStart_1,
data_type = 'counts',
variables=c('SEX', 'ATTITUDE'),
Freq = 'Freq')
The input data:
ATTITUDE
SEX 1 2
1 33 7
2 37 23
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p
1 3 25.678 0.00001 21.44 0.00009
2 1 5.194 0.02266 4.96 0.02594
0 0 0.000 1.00000 0.00 1.00000
AIC
47.259
30.775
27.581
These are tests that the K - Way and Higher-Order Effects are zero, i.e., tests of
the hypothesis that the Kth-order and all higher-order interactions are zero.
Each row shows the consequences of removing the K-way effects, and all
higher-order effects, from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC value for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p
1 2 20.484 0.00004 16.48 0.00026
2 1 5.194 0.02266 4.96 0.02594
AIC diff.
16.484
3.194
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 SEX 4.027 1 0.04477 2.027
3 ATTITUDE 16.457 1 5e-05 14.457
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 3.077 0.121 25.530 0.00000 2.841
SEX1 -0.314 0.121 -2.603 0.00924 -0.550
ATTITUDE1 0.491 0.121 4.074 0.00005 0.255
SEX1:ATTITUDE1 0.257 0.121 2.135 0.03275 0.021
CI_ub
(Intercept) 3.313
SEX1 -0.078
ATTITUDE1 0.727
SEX1:ATTITUDE1 0.494
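The SPSS "Model Selection"-style estimates above use effects (sum-to-zero) coding on the table after .5 is added to every cell. A rough way to see this in base R for the 2 x 2 SEX x ATTITUDE table is sketched below; the data frame and object names are ours, and R will warn about non-integer Poisson counts, which is harmless for this illustration.
# Hand-entered counts from the SEX x ATTITUDE input table above
headstart1 <- expand.grid(SEX = factor(1:2), ATTITUDE = factor(1:2))
headstart1$Freq <- c(33, 37, 7, 23)   # SEX varies fastest within ATTITUDE
# Saturated model on counts + .5 with sum-to-zero (effects) coding;
# the coefficients should be close to the Estimate column above
fit_es <- glm(I(Freq + 0.5) ~ SEX * ATTITUDE, family = poisson, data = headstart1,
              contrasts = list(SEX = "contr.sum", ATTITUDE = "contr.sum"))
round(coef(fit_es), 3)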
Backward Elimination Statistics:
Step GenDel Effects LR_Chi_Square df p
0 Generating Class SEX:ATTITUDE 0 0 1
Deleted Effect SEX:ATTITUDE 5.194 1 0.02266
AIC
27.581
30.775
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
The Final Model Formula:
Freq ~ SEX + ATTITUDE + SEX:ATTITUDE
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
0 0 0 0 0 27.581
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.497 0.174 20.086 0.000
SEX2 0.114 0.239 0.478 0.633
ATTITUDE2 -1.551 0.416 -3.726 0.000
SEX2:ATTITUDE2 1.075 0.494 2.178 0.029
Cell Counts and Residuals:
SEX ATTITUDE Obsd. Freq. Exp. Freq. Residuals Std. Resid.
1 1 1 33 33 0 0
3 1 2 7 7 0 0
2 2 1 37 37 0 0
4 2 2 23 23 0 0
Adjusted Resid.
1 0
3 0
2 0
4 0
LOGLINEAR(data = datasets$Stevens_2009_HeadStart_2,
data_type = 'counts',
variables=c('EDUC', 'TREAT', 'TEST'),
Freq = 'Freq')
The input data:
, , TEST = 1
TREAT
EDUC 1 2
1 11 56
2 14 44
3 17 35
, , TEST = 2
TREAT
EDUC 1 2
1 0 15
2 8 14
3 10 22
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p
1 11 140.114 0.00000 142.878 0.00000
2 7 23.243 0.00155 18.318 0.01061
3 2 5.974 0.05044 4.107 0.12829
0 0 0.000 1.00000 0.000 1.00000
AIC
194.502
85.631
78.362
76.388
These are tests that the K - Way and Higher-Order Effects are zero, i.e., tests of
the hypothesis that the Kth-order and all higher-order interactions are zero.
Each row shows the consequences of removing the K-way effects, and all
higher-order effects, from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC value for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p
1 4 116.871 0.00000 124.560 0.00000
2 5 17.269 0.00402 14.211 0.01432
3 2 5.974 0.05044 4.107 0.12829
AIC diff.
108.871
7.269
1.974
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 EDUC:TREAT 8.94 2 0.01145 4.94
3 EDUC:TEST 8.045 2 0.01791 4.045
4 TREAT:TEST 0.014 1 0.90717 -1.986
5
6 EDUC 0.098 2 0.95239 -3.902
7 TREAT 67.704 1 0 65.704
8 TEST 49.069 1 0 47.069
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p CI_lb
(Intercept) 2.642 0.136 19.395 0.00000 2.375
EDUC1 -0.511 0.252 -2.024 0.04298 -1.006
EDUC2 0.179 0.156 1.146 0.25160 -0.127
TREAT1 -0.679 0.136 -4.986 0.00000 -0.946
TEST1 0.588 0.136 4.313 0.00002 0.321
EDUC1:TREAT1 -0.577 0.252 -2.286 0.02224 -1.072
EDUC2:TREAT1 0.265 0.156 1.701 0.08901 -0.040
EDUC1:TEST1 0.520 0.252 2.058 0.03958 0.025
EDUC2:TEST1 -0.174 0.156 -1.113 0.26553 -0.480
TREAT1:TEST1 0.109 0.136 0.801 0.42304 -0.158
EDUC1:TREAT1:TEST1 0.351 0.252 1.392 0.16401 -0.143
EDUC2:TREAT1:TEST1 -0.256 0.156 -1.640 0.10095 -0.562
CI_ub
(Intercept) 2.909
EDUC1 -0.016
EDUC2 0.485
TREAT1 -0.412
TEST1 0.855
EDUC1:TREAT1 -0.082
EDUC2:TREAT1 0.571
EDUC1:TEST1 1.014
EDUC2:TEST1 0.132
TREAT1:TEST1 0.376
EDUC1:TREAT1:TEST1 0.846
EDUC2:TREAT1:TEST1 0.050
Backward Elimination Statistics:
Step GenDel Effects LR_Chi_Square df
0 Generating Class EDUC:TREAT:TEST 0 0
Deleted Effect EDUC:TREAT:TEST 5.974 2
1 Generating Class All of these terms: 5.974 2
EDUC
TREAT
TEST
EDUC:TREAT
EDUC:TEST
TREAT:TEST
Deleted Effect Test EDUC:TREAT 8.94 2
Deleted Effect Test EDUC:TEST 8.045 2
Deleted Effect Test TREAT:TEST 0.014 1
Deleted On This Step TREAT:TEST
2 Generating Class All of these terms: 5.988 3
EDUC
TREAT
TEST
EDUC:TREAT
EDUC:TEST
Deleted Effect Test EDUC:TREAT 9.075 2
Deleted Effect Test EDUC:TEST 8.18 2
Deleted On This Step none deleted
p AIC
1 76.388
0.05044 78.362
0.05044 1.974
0.01145 4.94
0.01791 4.045
0.90717 -1.986
0.11221 -0.012
0.0107 5.075
0.01674 4.18
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
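To see roughly what Step 1 above is doing for the EDUC x TREAT x TEST data, the all-two-way Poisson model can be fit and drop1() asked for likelihood-ratio tests of each two-way term; the weakest term (TREAT:TEST here) is the deletion candidate. This is only a sketch with a hand-built data frame, not the package's elimination routine.
# Hand-entered counts from the input tables above (TEST = 1 block, then TEST = 2)
headstart2 <- expand.grid(EDUC = factor(1:3), TREAT = factor(1:2), TEST = factor(1:2))
headstart2$Freq <- c(11, 14, 17, 56, 44, 35,   # TEST = 1
                      0,  8, 10, 15, 14, 22)   # TEST = 2
# All two-way interactions, no three-way term
fit_2way <- glm(Freq ~ (EDUC + TREAT + TEST)^2, family = poisson, data = headstart2)
# LR tests for dropping each two-way term; compare the Deleted Effect Test rows above
# (about 8.94 for EDUC:TREAT, 8.05 for EDUC:TEST, 0.01 for TREAT:TEST)
drop1(fit_2way, test = "LRT")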
The Final Model Formula:
Freq ~ EDUC + TREAT + TEST + EDUC:TREAT + EDUC:TEST
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
3 5.988 0.11221 4.059 0.25518 76.376
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.196 0.306 7.176 0.000
EDUC2 0.574 0.379 1.512 0.130
EDUC3 0.620 0.371 1.670 0.095
TREAT2 1.865 0.324 5.755 0.000
TEST2 -1.497 0.286 -5.240 0.000
EDUC2:TREAT2 -0.895 0.409 -2.187 0.029
EDUC3:TREAT2 -1.118 0.399 -2.798 0.005
EDUC2:TEST2 0.527 0.380 1.388 0.165
EDUC3:TEST2 1.011 0.363 2.782 0.005
Cell Counts and Residuals:
EDUC TREAT TEST Obsd. Freq. Exp. Freq. Residuals
1 1 1 1 11 8.988 2.012
7 1 1 2 0 2.012 -2.012
4 1 2 1 56 58.012 -2.012
10 1 2 2 15 12.988 2.012
2 2 1 1 14 15.950 -1.950
8 2 1 2 8 6.050 1.950
5 2 2 1 44 42.050 1.950
11 2 2 2 14 15.950 -1.950
3 3 1 1 17 16.714 0.286
9 3 1 2 10 10.286 -0.286
6 3 2 1 35 35.286 -0.286
12 3 2 2 22 21.714 0.286
Std. Resid. Adjusted Resid.
1 0.671 0.648
7 -1.419 -2.006
4 -0.264 -0.266
10 0.558 0.545
2 -0.488 -0.499
8 0.793 0.755
5 0.301 0.298
11 -0.488 -0.499
3 0.070 0.070
9 -0.089 -0.090
6 -0.048 -0.048
12 0.061 0.061
LOGLINEAR(data = datasets$Stevens_2009_Inf_Survival,
data_type = 'counts',
variables=c('Clinic', 'Care', 'Survival'),
Freq = 'Freq')
The input data:
, , Survival = 1
Care
Clinic 1 2
1 3 4
2 17 2
, , Survival = 2
Care
Clinic 1 2
1 176 293
2 197 23
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p
1 7 1066.428 0.00000 1035.836 0.00000
2 4 211.482 0.00000 199.646 0.00000
3 1 0.043 0.83524 0.044 0.83383
0 0 0.000 1.00000 0.000 1.00000
AIC
1108.610
259.665
54.226
56.183
These are tests that the K - Way and Higher-Order Effects are zero, i.e., tests of
the hypothesis that the Kth-order and all higher-order interactions are zero.
Each row shows the consequences of removing the K-way effects, and all
higher-order effects, from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC value for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p
1 3 854.946 0.00000 836.191 0.00000
2 3 211.439 0.00000 199.602 0.00000
3 1 0.043 0.83524 0.044 0.83383
AIC diff.
848.946
205.439
-1.957
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Clinic:Care 188.081 1 0 186.081
3 Clinic:Survival 12.173 1 0.00048 10.173
4 Care:Survival 0.039 1 0.84338 -1.961
5
6 Clinic 80.064 1 0 78.064
7 Care 7.062 1 0.00787 5.062
8 Survival 767.82 1 0 765.82
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 3.229 0.126 25.557 0.00000
Clinic1 0.174 0.126 1.376 0.16885
Care1 0.414 0.126 3.279 0.00104
Survival1 -1.595 0.126 -12.626 0.00000
Clinic1:Care1 -0.604 0.126 -4.783 0.00000
Clinic1:Survival1 -0.429 0.126 -3.397 0.00068
Care1:Survival1 0.009 0.126 0.074 0.94131
Clinic1:Care1:Survival1 0.055 0.126 0.435 0.66330
CI_lb CI_ub
(Intercept) 2.982 3.477
Clinic1 -0.074 0.422
Care1 0.167 0.662
Survival1 -1.843 -1.348
Clinic1:Care1 -0.852 -0.357
Clinic1:Survival1 -0.677 -0.182
Care1:Survival1 -0.238 0.257
Clinic1:Care1:Survival1 -0.193 0.303
Backward Elimination Statistics:
Step GenDel Effects LR_Chi_Square df
0 Generating Class Clinic:Care:Survival 0 0
Deleted Effect Clinic:Care:Survival 0.043 1
1 Generating Class All of these terms: 0.043 1
Clinic
Care
Survival
Clinic:Care
Clinic:Survival
Care:Survival
Deleted Effect Test Clinic:Care 188.081 1
Deleted Effect Test Clinic:Survival 12.173 1
Deleted Effect Test Care:Survival 0.039 1
Deleted On This Step Care:Survival
2 Generating Class All of these terms: 0.082 2
Clinic
Care
Survival
Clinic:Care
Clinic:Survival
Deleted Effect Test Clinic:Care 193.654 1
Deleted Effect Test Clinic:Survival 17.746 1
Deleted On This Step none deleted
p AIC
1 56.183
0.83524 54.226
0.83524 -1.957
0 186.081
0.00048 10.173
0.84338 -1.961
0.95969 -3.918
0 191.654
3e-05 15.746
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
The Final Model Formula:
Freq ~ Clinic + Care + Survival + Clinic:Care + Clinic:Survival
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
2 0.082 0.95969 0.084 0.95905 52.265
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.968 0.383 2.530 0.011
Clinic2 1.866 0.447 4.178 0.000
Care2 0.506 0.095 5.351 0.000
Survival2 4.205 0.381 11.042 0.000
Clinic2:Care2 -2.653 0.232 -11.458 0.000
Clinic2:Survival2 -1.756 0.450 -3.904 0.000
Cell Counts and Residuals:
Clinic Care Survival Obsd. Freq. Exp. Freq. Residuals
1 1 1 1 3 2.632 0.368
5 1 1 2 176 176.368 -0.368
3 1 2 1 4 4.368 -0.368
7 1 2 2 293 292.632 0.368
2 2 1 1 17 17.013 -0.013
6 2 1 2 197 196.987 0.013
4 2 2 1 2 1.987 0.013
8 2 2 2 23 23.013 -0.013
Std. Resid. Adjusted Resid.
1 0.227 0.222
5 -0.028 -0.028
3 -0.176 -0.178
7 0.021 0.021
2 -0.003 -0.003
6 0.001 0.001
4 0.009 0.009
8 -0.003 -0.003
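As a check on the final model above for the infant-survival data (again a hand-built sketch, not package code), an ordinary Poisson glm() with the same formula should reproduce the reported fit: its residual deviance is the LR chi-square of about 0.082 on 2 df, and its Pearson residuals match the Std. Resid. column. The adjusted residuals additionally account for leverage and may be computed somewhat differently by the package.
# Hand-entered counts from the input tables above (Survival = 1 block, then Survival = 2)
infsurv <- expand.grid(Clinic = factor(1:2), Care = factor(1:2), Survival = factor(1:2))
infsurv$Freq <- c(3, 17, 4, 2,        # Survival = 1
                  176, 197, 293, 23)  # Survival = 2
fit_ics <- glm(Freq ~ Clinic + Care + Survival + Clinic:Care + Clinic:Survival,
               family = poisson, data = infsurv)
deviance(fit_ics)                      # about 0.082 on 2 residual df
residuals(fit_ics, type = "pearson")   # compare with the Std. Resid. column above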
LOGLINEAR(data = datasets$TabFid_2019_small,
data_type = 'counts',
variables=c('Profession', 'Sex', 'Reading_type'),
Freq = 'Freq' )
The input data:
, , Reading_type = SCIFI
Sex
Profession Female Male
Administrators 5 10
Belly Dancers 10 5
Politicians 10 15
, , Reading_type = SPY
Sex
Profession Female Male
Administrators 10 30
Belly Dancers 25 5
Politicians 15 15
K - Way and Higher-Order Effects
K df LR Chi-Square p Pearson Chi-Square p
1 11 48.089 0.00000 52.097 0.00000
2 7 33.353 0.00002 32.994 0.00003
3 2 1.848 0.39695 1.922 0.38249
0 0 0.000 1.00000 0.000 1.00000
AIC
101.139
94.402
72.897
75.049
These are tests that the K - Way and Higher-Order Effects are zero, i.e., tests of
the hypothesis that the Kth-order and all higher-order interactions are zero.
Each row shows the consequences of removing the K-way effects, and all
higher-order effects, from the model.
The df values indicate the number of effects (model terms) that are removed.
The first row, labeled as 1, shows the consequences of removing all of the main
effects and all higher order effects (i.e., everything) from the model. This
usually results in poor fit. A statistically significant chi-square indicates
that the prediction of the cell frequencies is significantly worse than the
prediction that is provided by the saturated model. It would suggest that at
least one of the removed effects needs to be included in the model.
The second row, labeled as 2, shows the consequences of removing all of the
two-way and higher order effects from the model, while keeping the main effects.
A statistically significant chi-square indicates a reduction in prediction success
compared to the saturated model and that at least one of the removed effects needs
to be included in the model.
The same interpretation process applies if there is a K = 3 row, and so on.
A K = 3 row in the table would show the consequences of removing all of the
three-way and higher order effects from the model, while keeping the two-way
interactions and main effects.
A nonsignificant chi-square for a row would indicate that removing the
model term(s) does not significantly worsen the prediction of the cell
frequencies and the term(s) is nonessential and can be dropped from the model.
The bottom row in the table, labeled as 0, is for the saturated model. It
includes all possible model terms and therefore provides perfect prediction
of the cell frequencies. The AIC value for this model can be helpful in
gauging the relative fit of models with fewer terms.
K-Way Effects
K df LR Chi-Square p Pearson Chi-Square p
1 4 14.737 0.00528 19.103 0.00075
2 5 31.505 0.00001 31.071 0.00001
3 2 1.848 0.39695 1.922 0.38249
AIC diff.
6.737
21.505
-2.152
These are tests that the K - Way Effects are zero, i.e., tests whether
interactions of a particular order are zero. The tests are for model
comparisons/differences. For each K-way test, a model is fit with and then
without the interactions of that order, and the change in the Pearson chi-square
and likelihood ratio chi-square values is computed.
For example, the K = 1 test is for the comparison of the model with
all main effects and the intercept with the model with only the intercept.
A statistically significant K = 1 test is (conventionally) considered to
mean that the main effects are not zero and that they are needed in the model.
The K = 2 test is for the comparison of the model with all two-way
interactions, all main effects, and the intercept with the model with
the main effects, and the intercept. A statistically significant K = 2 test
is (conventionally) considered to mean that the two-way interactions are
not zero and that they are needed in the model.
The K = 3 test (if there is one) is for the comparison of the model
with all three-way interactions, all two-way interactions, all main
effects, and the intercept with the model with all two-way interactions,
all main effects, and the intercept. A statistically significant K = 3 test
is (conventionally) considered to mean that the three-way interactions
are not zero and that they are needed in the model, and so on.
The df values for the model comparisons are the df values associated
with the K-way terms.
The above "K - Way and Higher-Order Effects" and "K - Way" tests are for the
ncollective importance of the effects at each value of K. There are not tests
nof individual terms. For example, a significant K = 2 test means that the set
nof two-way terms is important, but it does not mean that every two-way term is
significant.
Partial Associations:
Effect LR.Chi.Square df p AIC.diff.
1
2 Profession:Sex 27.122 2 0 23.122
3 Profession:Reading_type 4.416 2 0.10993 0.416
4 Sex:Reading_type 0.621 1 0.43078 -1.379
5
6 Profession 1.321 2 0.51661 -2.679
7 Sex 0.161 1 0.68795 -1.839
8 Reading_type 13.255 1 0.00027 11.255
These are tests of individual terms in the model, with the restriction that
higher-order terms at each step are excluded. The tests are for differences
between models. For example, the tests of the 2-way interactions compare the
model with all 2-way interactions (and no higher-order interactions) against
the models obtained by removing each individual 2-way interaction in turn.
Parameter Estimates (SPSS "Model Selection", not "General", Parameter Estimates):
For saturated models, .500 has been added to all observed cells:
Estimate Std. Error z value p
(Intercept) 2.450 0.091 26.928 0.00000
Profession1 0.006 0.129 0.050 0.96042
Profession2 -0.200 0.137 -1.464 0.14310
Sex1 0.007 0.091 0.072 0.94296
Reading_type1 -0.249 0.091 -2.738 0.00617
Profession1:Sex1 -0.435 0.129 -3.363 0.00077
Profession2:Sex1 0.539 0.137 3.944 0.00008
Profession1:Reading_type1 -0.179 0.129 -1.385 0.16599
Profession2:Reading_type1 0.027 0.137 0.200 0.84146
Sex1:Reading_type1 -0.071 0.091 -0.785 0.43245
Profession1:Sex1:Reading_type1 0.176 0.129 1.364 0.17258
Profession2:Sex1:Reading_type1 -0.150 0.137 -1.101 0.27080
CI_lb CI_ub
(Intercept) 2.272 2.628
Profession1 -0.247 0.260
Profession2 -0.468 0.068
Sex1 -0.172 0.185
Reading_type1 -0.427 -0.071
Profession1:Sex1 -0.688 -0.181
Profession2:Sex1 0.271 0.806
Profession1:Reading_type1 -0.433 0.074
Profession2:Reading_type1 -0.240 0.295
Sex1:Reading_type1 -0.250 0.107
Profession1:Sex1:Reading_type1 -0.077 0.430
Profession2:Sex1:Reading_type1 -0.418 0.117
Backward Elimination Statistics:
Step GenDel Effects
0 Generating Class Profession:Sex:Reading_type
Deleted Effect Profession:Sex:Reading_type
1 Generating Class All of these terms:
Profession
Sex
Reading_type
Profession:Sex
Profession:Reading_type
Sex:Reading_type
Deleted Effect Test Profession:Sex
Deleted Effect Test Profession:Reading_type
Deleted Effect Test Sex:Reading_type
Deleted On This Step Sex:Reading_type
2 Generating Class All of these terms:
Profession
Sex
Reading_type
Profession:Sex
Profession:Reading_type
Deleted Effect Test Profession:Sex
Deleted Effect Test Profession:Reading_type
Deleted On This Step Profession:Reading_type
3 Generating Class All of these terms:
Profession
Sex
Reading_type
Profession:Sex
Deleted Effect Test Profession:Sex
Deleted On This Step none deleted
4 Generating Class All of these terms:
Profession
Sex
Reading_type
Profession:Sex
Deleted Effect Test Reading_type
Deleted On This Step none deleted
LR_Chi_Square df p AIC
0 0 1 75.049
1.848 2 0.39695 72.897
1.848 2 0.39695 -2.152
27.122 2 0 23.122
4.416 2 0.10993 0.416
0.621 1 0.43078 -1.379
2.469 3 0.48099 -3.531
26.795 2 0 22.795
4.089 2 0.12944 0.089
6.558 5 0.25567 -3.442
26.795 2 0 22.795
6.558 5 0.25567 -3.442
13.255 1 0.00027 11.255
The hierarchical backward elimination procedure begins with all possible
terms in the model and then removes, one at a time, terms that do not
satisfy the criteria for remaining in the model.
A term is dropped only if removing it does not significantly reduce model fit
AND the term is not involved in any higher-order interaction. On each step above,
the focus is on the term whose removal would produce the least significant change
in the likelihood ratio chi-square. If that change is not significant, the term
is removed.
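The first two elimination steps above can be mimicked for the Profession x Sex x Reading_type data with drop1() and update(); this is a hand-built sketch of the logic, not the package's routine, and it uses the same significance-based reasoning rather than AIC.
# Hand-entered counts from the input tables above (Reading_type = SCIFI block, then SPY)
tabfid <- expand.grid(Profession = c("Administrators", "Belly Dancers", "Politicians"),
                      Sex = c("Female", "Male"),
                      Reading_type = c("SCIFI", "SPY"))
tabfid$Freq <- c(5, 10, 10, 10, 5, 15,    # SCIFI: Female column, then Male
                 10, 25, 15, 30, 5, 15)   # SPY:   Female column, then Male
# Step 1 analogue: all two-way terms; Sex:Reading_type has the least significant LR test
fit_2way <- glm(Freq ~ (Profession + Sex + Reading_type)^2, family = poisson, data = tabfid)
drop1(fit_2way, test = "LRT")
# Step 2 analogue: after deleting Sex:Reading_type, Profession:Reading_type is also
# nonsignificant (LR about 4.09 on 2 df, p about 0.13) and is deleted next
fit_step2 <- update(fit_2way, . ~ . - Sex:Reading_type)
drop1(fit_step2, test = "LRT")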
The Final Model Formula:
Freq ~ Profession + Sex + Reading_type + Profession:Sex
The Final Model Goodness-of-Fit Tests:
df LR Chi-Square p Pearson Chi-Square p AIC
5 6.558 0.25567 6.586 0.25331 71.607
Generalized Linear Model Coefficients for the Final Model:
Estimate Std. Error z value
(Intercept) 1.672 0.280 5.971
ProfessionBelly Dancers 0.847 0.309 2.746
ProfessionPoliticians 0.511 0.327 1.564
SexMale 0.981 0.303 3.240
Reading_typeSPY 0.598 0.168 3.561
ProfessionBelly Dancers:SexMale -2.234 0.469 -4.759
ProfessionPoliticians:SexMale -0.799 0.406 -1.966
Pr(>|z|)
(Intercept) 0.000
ProfessionBelly Dancers 0.006
ProfessionPoliticians 0.118
SexMale 0.001
Reading_typeSPY 0.000
ProfessionBelly Dancers:SexMale 0.000
ProfessionPoliticians:SexMale 0.049
Cell Counts and Residuals:
Profession Sex Reading_type Obsd. Freq. Exp. Freq.
1 Administrators Female SCIFI 5 5.323
7 Administrators Female SPY 10 9.677
4 Administrators Male SCIFI 10 14.194
10 Administrators Male SPY 30 25.806
2 Belly Dancers Female SCIFI 10 12.419
8 Belly Dancers Female SPY 25 22.581
5 Belly Dancers Male SCIFI 5 3.548
11 Belly Dancers Male SPY 5 6.452
3 Politicians Female SCIFI 10 8.871
9 Politicians Female SPY 15 16.129
6 Politicians Male SCIFI 15 10.645
12 Politicians Male SPY 15 19.355
Residuals Std. Resid. Adjusted Resid.
1 -0.323 -0.140 -0.141
7 0.323 0.104 0.103
4 -4.194 -1.113 -1.176
10 4.194 0.826 0.805
2 -2.419 -0.687 -0.711
8 2.419 0.509 0.500
5 1.452 0.771 0.725
11 -1.452 -0.572 -0.595
3 1.129 0.379 0.371
9 -1.129 -0.281 -0.285
6 4.355 1.335 1.256
12 -4.355 -0.990 -1.031