ASSIGNMENT 2

Meade Brooks

February 11, 2001

Ross, Jonathan, & Schulze, Robert. (1999). Can computer-aided instruction accommodate all learners equally? British Journal of Educational Technology, 30(1), 5-24.

PRINCIPLE: This exploratory study investigated the impact of learning styles on human-computer interaction. Seventy learners enrolled in a large urban post-secondary institution participated in the study. The Gregorc Style Delineator was used to obtain subjects' dominant learning style scores (concrete sequential, concrete random, abstract sequential, and abstract random). The study found that learning styles significantly affected learning outcomes and revealed an interaction effect between dominant learning style and achievement scores.

DESIGN: To investigate differences between participants, all learning style groups received the same treatment: training on the one-rescuer CPR procedure. The entire experimental session took two hours to complete for each of the four groups of approximately 15 participants. One hour was devoted to assessing and interpreting learning style scores; the second hour was dedicated to the CAI (Computer-Aided Instruction) session on CPR. A factorial experimental design (nonconfounding) was implemented based on the conceptual framework of the one-group pretest-posttest design (O1 X O2). A pre-test and a post-test, each comprising 20 CPR-related questions, were administered to each subject. The test-retest reliability alpha coefficient was 0.86 for the pre-test and 0.89 for the post-test. To explore whether learning outcomes were influenced by dominant learning style group, a two-way ANOVA (2 x 4 factorial analysis, unequal n) was conducted. The data revealed a statistically significant main effect for the pre-test and post-test means, with the abstract sequential group displaying the highest gain. There was also a significant interaction between learning style and learning outcome. Therefore, it would appear that dominant learning styles affected the magnitude and direction of the differences between the pre-test and post-test results. A further factorial analysis examined the nature of the participants' interaction with the computer tutorial: a MANOVA was conducted using six patterns of learning (based on subjects' navigation of the CAI tutorial) as the dependent variables and dominant learning style as the independent variable. Results suggested no significant effects for patterns of learning by dominant learning style.
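
A minimal sketch of how such a 2 x 4 factorial analysis (unequal n) could be run in Python with statsmodels is given below. The file name and the column names (style, test, score) are assumed for illustration only, the repeated-measures nature of the pre-test/post-test factor is ignored for simplicity, and this is not the authors' own analysis code.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per subject per test occasion,
    # with columns "style" (four Gregorc groups), "test" (pre/post), "score".
    df = pd.read_csv("cpr_scores.csv")

    # Two-way factorial ANOVA; Type II sums of squares tolerate unequal
    # group sizes, which matches the unequal-n design described above.
    model = smf.ols("score ~ C(style) * C(test)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))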

FACTORS JEOPARDIZING INTERNAL VALIDITY: An ANCOVA was conducted to identify the influence of learning style on post-test scores, while controlling for differences in pre-test knowledge demonstrated by the four learning style groups. The ANCOVA showed a significant effect for the pre-test; however, learning style still retained a significant influence on post-test scores. The adjusted R2 value of 0.52 suggested that dominant learning styles explained 52% of the variance in post-test scores, after controlling for the influence of pre-test scores. The degree of unexplained variance, 1 - R2 = 48%, indicates that another variable may be confounded with learning style.
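
An ANCOVA of this kind amounts to regressing post-test scores on the pre-test covariate plus the learning style factor. The sketch below shows one way to do this; the file and column names are assumed for illustration and are not taken from the study.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical wide-format data: one row per subject, with columns
    # "style", "pre" (pre-test score), and "post" (post-test score).
    wide = pd.read_csv("cpr_scores_wide.csv")

    # ANCOVA: post-test score modeled as pre-test covariate plus the
    # dominant learning style factor.
    ancova = smf.ols("post ~ pre + C(style)", data=wide).fit()
    print(sm.stats.anova_lm(ancova, typ=2))
    print(ancova.rsquared_adj)  # the paper reports an adjusted R2 of 0.52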

FACTORS JEOPARDIZING EXTERNAL VALIDITY: There may have been a reactive or interaction effect with the pre-test, making generalization of the results difficult. However, since true random sampling was not realistic in this situation, a pre-test was necessary.

ADEQUACY OF STATISTICAL PROCEDURES USED: The authors did not mention checking whether the data met the assumptions for performing an analysis of variance (normally distributed residuals, homogeneity of variance rather than heteroscedasticity, and linear relationships). Random sampling was not possible in this experiment.
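
Such checks are straightforward to carry out. The sketch below uses the same hypothetical wide-format data as above; the Shapiro-Wilk and Levene tests shown are common choices for these assumptions, not tests the authors report having used.

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    wide = pd.read_csv("cpr_scores_wide.csv")  # hypothetical wide-format data
    model = smf.ols("post ~ pre + C(style)", data=wide).fit()

    # Shapiro-Wilk test of normality on the model residuals.
    print(stats.shapiro(model.resid))

    # Levene's test for homogeneity of variance across the four style groups.
    groups = [g["post"].values for _, g in wide.groupby("style")]
    print(stats.levene(*groups))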

LOGIC SUMMARY: The factorial design of this experiment was sufficient to determine whether learning outcomes in computer-aided instruction were influenced by dominant learning style. The interaction effects serve as specificity-of-effects qualifications and are therefore relevant to any attempt to generalize the results.

DESIGN IMPROVEMENT: The experiment should be replicated at a different location with a larger, randomly selected sample of participants. The design itself appears sound.

EXTENSION OF THE STUDY: This study shows that no single approach works for all learners. Although technology can greatly aid instruction and education, it must be tailored to the unique needs of each individual. Because we are all different, no one "type" of technology will meet the needs of all people.