Assignment #2 - Article Critique

Due Date: February 21

Value: 15 points total


  • Post the written part of the assignment to the forum.


Locate and critique a new and different quantitative article related to the impact of technology on education.

The article/document needs to report either a true (randomized) experiment or a quasi-experiment.

No qualitative studies, case studies, or other non-positivistic research.

Once you find an article you need to:

  1. Write a one page critique based on the guidelines provided below.
  2. Post the critique to the assignment forum.
  3. Reply to at least two postings in a meaningful manner.

Guide for the Written Critique of an Experimental Study

Reference: APA format

Principle: Concise statement of the relationship being investigated. It may differ from the author's, especially if you see the study as related to an area that is not its primary focus. Note that the principle is not a hypothesis but the broader idea being tested by the specific hypothesis of the study.

Research Question and/or Hypothesis: What research hypotheses are being examined?

Design: Diagrammatic representation of the experimental groups. Indicate random assignment or not.

Population and/or Sample: Number of subjects in each group (control and treatment).

Random Sample: Describe the selection method of the sample. Is it random or quasi-experimental?

Independent and Dependent Variables: Summary statement of treatments. Define operationally. May not agree with author's definition. Be accurate and concise.

Factors Jeopardizing Internal Validity: Identify the type of factor, and be specific as to probable examples of the identified factors operating in the study.

Adequacy of Statistical Procedures Used: Identify any inappropriate use of statistics to compare differences. Suggest improvements.

Results: Graphic representation of differences observed. Be concise. Represent only findings relevant to the principle. The principle may not be the primary focus of the author, hence secondary findings may be appropriate for your purposes. Indicate significant differences. A copy of an analysis of variance table or chi-square figures is not results.

Briefly Summarize Logic (Inductive and/or Deductive): What line of reasoning is used to tie empirical results to the principle being investigated? Be concise. What assumptions are unstated? Are they reasonable?

Design Improvement: How could the design be modified to more adequately test the hypothesis?

Comments: This should be a statement relating the study to other studies, citing why you thought it important or interesting, or any other information of use to you as you review the summary at a later time.

Extension of the study: What additional studies could be designed to extend the idea? Diagram and summarize very briefly.


  1. Entire critique should fit on a single sheet - one side.
  2. Be concise. Draw pictures. Don't just copy statements. Abbreviate but don't lose meaning.
  3. Summarize the study for your purposes, not the author's. Your principle may differ from the author's. The study may test several principles while your interest is in only one. Auxiliary results may be interesting but unnecessary.
  4. If you see study as related to principle in an unusual way, not obvious to casual reader, be sure to summarize reasoning in comments.
  5. The author's conclusions may be useful but usually are not. Put emphasis on demonstrated differences (empirical relationships), not on the author's conclusions.
  6. Remember this is a critique. Try to represent exactly the issues of concern.


Critique of an Experimental Study

Reference: Powell, V. J. (2000). Effects of SuccessMaker Computerized Curriculum on the Behavior of Disruptive Students. Journal of Educational Technology Systems, 28(4), 335-347.

Principle: This study monitored the effects of CBI on the students' psychosocial outcomes (self-esteem, depression and locus of control) and academic outcomes.

Type of Design: The ninety-four subjects, ages ten to seventeen, identified as chronically disruptive and assigned to an urban Alternative Education Program (AEP), made up the single group studied. The one-group pretest-posttest design (O X O) best represents the design of the study. The Rosenberg Self-Esteem Scale, Depression Self-Rating Scale, and Nowicki-Strickland Scale were used to measure behavioral patterns. The academic outcomes were measured via grade point averages and attendance records.

Factors Jeopardizing Internal Validity: History, due to the students' immense social issues; testing, in the form of test anxiety; mortality, since several students did not complete the study.

Factors Jeopardizing External Validity: Statistical regression, since groups scored very high or low on the pretest; reactive effects of experimental arrangements, due to the new surroundings.

Adequacy of Statistical Procedures Used: The t-test of psychosocial and academic measures and the percentages covering the SuccessMaker curriculum were both adequate statistical measures.
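The adequacy claim is easy to check against the design: a one-group pretest-posttest (O X O) comparison reduces to a paired t-test on the pre/post differences. A minimal pure-Python sketch, using made-up self-esteem scores rather than the study's data:

```python
import math

def paired_t(pre, post):
    """Paired t statistic for one-group pretest-posttest (O X O) data."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)                                # standard error of the mean diff
    return mean_d / se

# Hypothetical pre/post self-esteem scores for five students (not the study's data)
pre  = [18, 22, 20, 25, 19]
post = [21, 24, 23, 26, 22]
t = paired_t(pre, post)
```

The computed t would then be compared against the critical t value with df = n - 1 at the chosen alpha level.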

Briefly Summarize Logic (Inductive and/or Deductive): For one hundred eighty days students used the SuccessMaker Computerized Curriculum. The only test that showed a significant increase was the self-esteem test. The students' academic achievement decreased during their assignment to the AEP, which contradicts typical CBI findings.

Design Improvement: The study should increase the pool to include more students from various regions. The selected pool of students was very limited and unstable.

Extension of the Study: An additional study should be conducted to see whether the students' self-esteem changes were long term or short term. Also, why did the students' academic performance not improve with the use of CBI?

Critique of an Experimental Study

Reference: Atkinson, S. E. (1999). Key Factors Influencing Pupil Motivation in Design and Technology. Journal of Technology Education, 10(2).

Principle: This article examines the relationship between internal and external factors and a pupil's ability to perform and be motivated. The internal and external factors are: pupil performance in design and technology project work, pupil skills associated with design and technology project work, pupil personal goal orientation, pupil cognitive style, pupil creativity, teaching strategy, and teacher motivation.

Type of Design: The descriptive analysis included percentage distribution, rank order, one-sample chi-square tests, unpaired comparisons of averages using t-tests, the chi-square test for independence, and Fisher's Exact Test for 2x2 tables. The experimental design that best fits this study is the Posttest-Only Control Group design.

Factors Jeopardizing Internal Validity: Only 5% of pupils were enthusiastic about the process in which they were involved. We do not know whether the project work itself played a part in the internal validity of the study; perhaps the pupils simply did not like the project work.

Factors Jeopardizing External Validity: Teachers' motivation, strategies, and presentation style play a big part in students' performance. In addition, the project lasted for a year in some instances, so boredom could have been a factor in student performance.

Adequacy of Statistical Procedures Used: The author used multiple strategies to arrive at the statistical results: percentage distribution, rank order, one-sample chi-square tests, unpaired comparisons of averages using t-tests, the chi-square test for independence, and Fisher's Exact Test for 2x2 tables.
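Of the procedures listed, Fisher's Exact Test is worth spelling out, since it is the one used when expected cell counts in a 2x2 table are too small for chi-square. The one-sided p-value follows directly from the hypergeometric distribution; this sketch uses an invented table, not Atkinson's data:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact p for a 2x2 table [[a, b], [c, d]]:
    probability of observing a count in the top-left cell at least as
    large as `a`, with all row/column margins held fixed."""
    n = a + b + c + d
    p = 0.0
    for k in range(a, min(a + b, a + c) + 1):      # tail: a, a+1, ..., max possible
        p += comb(a + b, k) * comb(c + d, (a + c) - k) / comb(n, a + c)
    return p

# Hypothetical 2x2 table (e.g., motivated vs. not by teaching strategy)
p = fisher_exact_one_sided(3, 1, 1, 3)
```

For this invented table the exact tail probability is 17/70, about 0.24, i.e., far from significant.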

Briefly Summarize Logic (Inductive and/or Deductive): This research determined that:

  1. 60% of the students were unmotivated.
  2. 22% did not enjoy the activity.
  3. The prescriptive way the activity was adopted played a major role in stifling the creativity of the students.
  4. Teacher deadlines and delivery method of teaching strategies also played a major role in the large percentage of unmotivated students and level of their performance.

Design Improvement: The author suggests that holistic assessment procedures and more flexible design-process models would have been more appropriate. He also states that teachers need to develop strategies that will guide students through the process and create a partnership where ownership is a joint affair.

Extension of the Study: This experiment could be repeated with modifications in three areas.

  1. Completion time limited to a few weeks instead of a yearlong program.
  2. Consistency between the selected schools concerning the methods of delivery.
  3. Time allowed for the completion of the project, since some projects were short while others lasted a year.

Critique of an Experimental Study

Reference: Ross, J. & Schulze, R. (1999). Can Computer-Aided Instruction Accommodate All Learners Equally? British Journal of Educational Technology, 30(1), 5-24.

PRINCIPLE: This exploratory study investigated the impact of learning styles on human-computer interaction. Seventy learners who were enrolled in a large urban post-secondary institution participated in the study. The Gregorc Style Delineator was used to obtain subjects' dominant learning style scores (concrete sequential, concrete random, abstract sequential and abstract random). The study found that learning styles significantly affected learning outcomes, and an interaction effect between dominant learning style and achievement scores was revealed.

DESIGN: To investigate differences between participants, all learning style groups received the same treatment: training on the one-rescuer CPR procedure. The experimental sessions took two hours to complete for each of four groups of approximately 15 participants. One hour was devoted to assessing and interpreting learning style scores; the second hour was dedicated to the CAI (Computer-Aided Instruction) session on CPR. A factorial experimental design (nonconfounding) was implemented based on the conceptual framework of the one-group pretest-posttest design (O1 X O2). A pre-test and a post-test, each comprising 20 CPR-related questions, were administered to each subject. The test-retest reliability alpha coefficient was 0.86 for the pre-test and 0.89 for the post-test. To explore whether learning outcomes were influenced by dominant learning style group, a two-way ANOVA (2 x 4 factorial analysis, unequal n) was conducted. The data revealed a statistically significant main effect for the pre-test and post-test means, with the abstract sequential group displaying the highest gain. There was also a significant interaction between learning style and learning outcome. Therefore, it would appear that dominant learning styles affected the magnitude and direction of the differences in the pre-test and post-test results. Further factorial analysis was done regarding the nature of the participants' interaction with the computer tutorial. A MANOVA was conducted using six patterns of learning (based on subjects' CAI navigation during the tutorial) as the dependent variables and dominant learning style as the independent variable. Results suggested no significant effects for patterns of learning by dominant learning style.

FACTORS JEOPARDIZING INTERNAL VALIDITY: An ANCOVA was conducted to identify the influence of learning style on post-test scores, while controlling for differences in pre-test knowledge demonstrated by the four learning groups. The ANCOVA showed a significant effect for the pre-test; however, learning styles still retained a significant influence on post-test scores. The adjusted R2 value of 0.52 suggested that dominant learning styles explained 52% of the variance in post-test scores after controlling for the influence of pre-test scores. The degree of unexplained variance, 1 - R2 = 48%, indicates another variable may be confounded with learning style.
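The variance-explained reasoning above follows directly from the definition of R²: the ratio of residual to total sum of squares fixes how much variance is left for confounds. A small illustration with invented post-test scores and model predictions (not the study's data):

```python
def r_squared(observed, predicted):
    """Proportion of variance in `observed` explained by `predicted`:
    R^2 = 1 - SS_residual / SS_total."""
    mean_y = sum(observed) / len(observed)
    ss_tot = sum((y - mean_y) ** 2 for y in observed)           # total variation
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained
    return 1 - ss_res / ss_tot

# Hypothetical post-test scores and fitted values (not the study's data)
obs  = [12, 15, 14, 18, 20]
pred = [13, 14, 15, 17, 20]
r2 = r_squared(obs, pred)   # share of variance explained
unexplained = 1 - r2        # share left for confounded variables
```

In the study's terms: an adjusted R² of 0.52 leaves 1 - 0.52 = 0.48 of the variance attributable to variables other than learning style.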

FACTORS JEOPARDIZING EXTERNAL VALIDITY: There may have been a reactive or interaction effect with the pre-test, making generalization of the results difficult. However, since true random sampling could not be done realistically in this situation, a pre-test was necessary.

ADEQUACY OF STATISTICAL PROCEDURES USED: The author did not mention checking whether the data met the requirements for performing an analysis of variance (normally distributed residuals, no heteroscedasticity, linear relationships). Random sampling was not possible in this experiment.

LOGIC SUMMARY: The factorial design of this experiment was sufficient to determine if learning outcomes were influenced by dominant learning style regarding computer-aided instruction. The interaction effects are specificity-of-effects rules and are relevant to generalization efforts.

DESIGN IMPROVEMENT: The experiment needs to be repeated at a different location with more participants, randomly selected. The actual design seems fine.

EXTENSION OF THE STUDY: This study shows that not everything works for all people. Although technology can greatly aid instruction/education, it must be tailored to the unique needs of each individual. Because we are all different, no one "type" of technology will meet the needs of all people.

Critique of an Experimental Study

Reference: Hruskocy, C., Cennamo, K. S., Ertmer, P. A., & Johnson, T. (2000). Creating a Community of Technology Users: Students Become Technology Experts for Teachers and Peers. Journal of Technology and Teacher Education, 8 (15 pgs.).

Principle: A "bottom-up" approach to technology training in the elementary school setting. Training a key group of students to become computer experts in specific technologies, thus giving teachers and peers ready access to a technology help desk or support system.

Type of Design: The Solomon Four-Group Design (R O X O), a posttest-only design.

The effects of testing and the interactions of testing will be examined over the course of the school year, the control group being the students in training. The project was designed to create a collaborative community of learners. Three groups are used: six graduate students and their professor; the school community, consisting of 10 teachers (2 from each of grades 1-5), their students, and the librarian. Elementary students are given specialized technology training each week to serve as technology experts in the classroom and support the integration of computers into the classroom for an entire school year. In the first semester, the graduate students and professor gave students hands-on training on specific learning stations each week in half-hour sessions. The learning stations introduced students to the Library Browser, ClarisWorks, CD-ROM software, and XapShot cameras. Every week students had a "take-home" product completed from each learning station. Every two weeks the university team would meet with the teachers and librarian to discuss the technology activities. During the second semester, eight students were selected to receive intensive technical training to serve as computer experts in the classroom. Also, each class was assigned a project that allowed them to integrate and apply the skills they learned in the first semester. Teachers were invited to attend each weekly class session and were given one after-school training lesson on the computers.

Factors Jeopardizing Internal Validity: History of the teachers: fear and confidence levels regarding technology. The teachers' attitude toward their traditional teaching role. Teachers' lack of involvement in the technology implementation efforts.

Factors Jeopardizing External Validity: Not enough computers to go around. Not enough "tech time": 30-minute sessions instead of the desired hour-long sessions. Need for more instructional personnel and more instructional software.

Adequacy of Statistical Procedures Used: Data were collected through student/teacher interviews, inventories, and technology surveys. A qualitative analysis of the responses was conducted, looking for general patterns and themes from the individuals. The university team and librarian prepared a reflective paper at the end of the year. The teachers and principal answered an open-ended set of questions.

Briefly Summarize Logic (Inductive and/or Deductive): The researchers determined that:

  1. Teachers were more motivated to learn and use technology.
  2. Teachers implemented technology more in class.
  3. Elementary students were more motivated to learn and use technology.
  4. Students were more comfortable with technology.
  5. Students learned a new way of learning.
  6. Students learned the value of collaboration.

Design Improvement: Provide more computers during "tech time" and more instructional software. Sessions with elementary students need to be longer. More teaching personnel and teacher involvement are needed. Conduct an hour of interaction with teachers each month to generate new ideas to help transfer technology skills into the classroom.

Extension of the Study: Have the students, librarian, and teachers provide the instruction at the learning stations for the next school year. The university team could arrange teacher in-service days (workshops) to provide training and involvement from teachers. An additional after-school technology training program was started with a group of 24 students who receive extra training; this training continues with one-hour sessions per week after school. Meetings are held once a month with the university team and school personnel to accomplish and achieve goals.

Critique of an Experimental Study

Reference: Clark, M. C. (2000). The effect of video-based interventions on self-care. Western Journal of Nursing Research, 22, 895-911.

Principle: This study examines the effectiveness of video-based intervention training on caregivers for elderly patients.

Type of Design: This is a quantitative study using a pretest and two posttests. Three groups are studied: a video-only session on self-care for caregivers, a video/discussion session on self-care for caregivers, and a control group trained on other material. The members of each group were randomly selected from a pool of 97 caregivers. Each participant completed surveys prior to training, immediately after the training, and 6-8 weeks afterward. The surveys measured the self-care behaviors being absorbed by the caregivers. While this structure does not match any experimental design exactly, it appears to be a variant of the Pretest-Posttest Control Group Design.

		R	O1	X1	O2	O3
		R	O4	X2	O5	O6
		R	O7		O8	O9

Factors Jeopardizing Internal Validity: Testing and instrumentation can be sources of error in this study. The surveys were self-reported, which introduces a testing problem. Prior to the training, a caregiver may have had a more general concept of self-care; after the training, they would have a more concrete example, but the first test is still based on existing knowledge and may have been inflated or deflated depending on respondent bias. Instrumentation may have been a problem when one considers the change in the participants after 8 weeks of caregiving. There is no mention of the tenure of the caregivers, which may be a source of error if a person is new and is starting to tire of unexpected duties with the elderly patient.

Factors Jeopardizing External Validity: Reactive effects of experimental arrangements seem to be the largest risk of error. The caregivers studied were working with elderly patients in the United States. The authors must be careful not to apply these results to caregivers working with different age or cultural groups.

Adequacy of Statistical Procedures Used: The authors used the ANOVA and means comparison tests to discover significant differences among the groups.

Brief Summary of Logic (Inductive/Deductive): The logic behind the study is that video-based training allows the caregiver to step out of their existing routine and identify with someone who has similar concerns as them. While there is not a clear answer as to why the discussion-video group did not have the long term positive effects of the video group, both programs showed improvement in caregiver self-care attitudes.

Design Improvement: The control group for this study does not seem sufficient. While this group was watching a program unrelated to self-care, it would be better to offer no training to the control group.

Extension of the Study: This study can be extended by applying the experimentation to other types of caregivers in relation to age and cultural group.

Critique of an Experimental Study

Reference: Fuller, H. (2000). First Teach Their Teachers: Technology Support and Computer Use in Academic Subjects. The Journal of Research on Computing in Education.

Principle: In First Teach Their Teachers: Technology Support and Computer Use In Academic Subjects by Hester Fuller, presented in the Summer 2000 issue of The Journal of Research on Computing in Education, the author examines the following research questions:

  1. Do students report a higher incidence of computer use in subjects in schools where the computer coordinator devotes more time to supporting teachers as a user group or students as a user group?
  2. Do students exhibit a higher incidence of school computer use in academic subjects in schools where the computer coordinator devotes more time to system maintenance, user training for students or teachers, troubleshooting, selecting materials for teacher use, writing or adapting software for use, and developing integration lesson plans?

Type of Design: The author attempted to portray this as an experimental design, but I saw no evidence of a control group. Instead, data are compared to national "norms," although the author freely admits that, at least in terms of SES, this is not a normal population.

Abstract: Her methodology for examining the research questions involved existing data gathered from the second implementation of the IEA CompEd Study, with a sample of 3,805 students (n = 3,805) in 167 grade 5 and grade 11 classrooms. Roughly 4% of classes (n = 225) were set aside for lack of information or identifying characteristics. The sample consisted of a roughly even number of males and females, with a preponderance of white (70%) students. From the SES indicators available, this sample was slightly more affluent than the national average. The author regressed the above utilizations of coordinator time, as the independent variables, on average minimum student use as the dependent variable. The results, using a fitted-models approach to the data analysis, indicated a significant impact on the dependent variable (p = .04) for grade 5 only in response to question one, and a significant impact in question two in terms of writing lesson plans (p = .02) at the 11th-grade level.

Factors Jeopardizing Internal Validity: No information was given about the instrument's reliability or validity, so it was difficult to determine internal validity. The presentation of information gave the impression of a "fishing expedition," which raised serious questions in my mind regarding validity.

Factors Jeopardizing External Validity: The group examined, while having a large sample size, was not normed in terms of SES and other key indicators, limiting the usefulness of this study.

Adequacy of Statistical Procedures Used: Multiple regressions were the primary means of analyzing the data, with some type of modeling (which I did not understand) being the means by which conclusions were drawn.

Brief Summary of Logic (Inductive/Deductive): Certain aspects of a technology coordinator's job might have more impact on student achievement than others. Note that she apparently assumes that having a technology coordinator has an impact at all; this is something that I feel should not be a given.

Design Improvement: Simplify it and/or break it into several different studies with true populations that are generalizable. Provide adequate information about the instrument so we are not left hanging as to internal validity.

Extension of the Study: This study is already overextended. I think she should have started with the question: does having a technology coordinator impact instruction? From there the idea could have been narrowed.

Critique of an Experimental Study

Reference: Flowers, C., Hancock, D. & Joyner, R (2000). "Effects of Instructional Strategies and Conceptual Levels on Students' Motivation and Achievement in a Technology Course". Journal of Research and Development in Education. 33(3), 187-194.

Principle: The researchers in this study examined low and high conceptual-level students' achievement and motivation in a college computer technology course. The students were exposed to direct and indirect styles of instruction.

Type of Design: This study is a design 4, Pretest-Posttest Control Group Design (according to the Campbell and Stanley textbook). Illustratively, the design looks like this:

               R  O1  X1  O2
               R  O3  X2  O4

Two groups were randomly assigned and exposed to either direct instruction or indirect instruction. The students were classified as either low or high conceptual students according to an assessment test. Achievement and motivation were measured using different types of evaluation tools.

Factors Jeopardizing Internal Validity: History: the students were randomly assigned and individually assessed. Maturation, instrumentation, and testing: each group was exposed to five weeks of instruction, and the testing tools were evaluated to be valid. Statistical regression: mean scores of the participants were used. Selection: the students were randomly assigned. Mortality: regular attendance was a part of the study.

Factors Jeopardizing External Validity: The testing tools were considered valid through a review of the literature. Eighty-two percent of the sample group was female. All of the participants had elected to take the computer technology course.

Adequacy of Statistical Procedures Used: Means, standard deviations, and ANOVA's were calculated to evaluate the interactions.

Briefly Summarize Logic: According to the results of this study, the authors believe high conceptual-level students demonstrated higher motivation and achievement levels with indirect instruction. As the authors write, these findings may not replicate in other fields of study. This technology course was technical in nature, and students in this course may be used to self-directed learning. Many computer skills can be self-taught; therefore, the indirect style of instruction may seem natural to this sample group.

Design Improvement: The sample size should be larger than sixty-five participants. A gender difference may not have been considered. Eighty-two percent of the group was female. The achievement test needs more analysis. It was designed by the researchers and not tested for reliability prior to the study. The course objectives were used to develop the test questions. The questions should be reviewed to discuss gender and cultural issues. Also, a pen-and-pencil test may not be the best assessment tool in a technology course that uses a more "hands-on" approach. The assessment test may not match the mode of instruction.

Extension of the Study: This study could be compared to similar studies in other academic fields to reveal if the findings are replicated. A study could be done to understand if "non-technical" students prefer indirect styles of instructions.

Critique of an Experimental Study

Reference: Navarro, Peter, Shoemaker, Judy (2000). Performance and Perceptions of Distance Learners in Cyberspace. The American Journal of Distance Education, 14(2), 15-35

Principle: The study looks at the academic achievement and attitudes of traditional learners versus cyberlearners.

Design: The study is a quasi-experimental design using a static group comparison.

This study was conducted at the University of California, where 200 students enrolled in an introductory macroeconomics course self-selected into two groups. One group took the class as an Internet course and the other in the traditional method. There were 49 students who chose to take the class over the Internet, while the remaining 151 took the class with traditional teaching methods. Among the cyberlearners, 46 finished the study, and 89 traditional students completed the study.

Independent Variables: The independent variable is the experimental group taking the class via the Internet instead of the most common method of instruction.

Dependent Variables: The dependent variables are the performance and perception of on-line students compared with the students in the classroom.

Procedures: Two evaluation tools were used. The first evaluation was in the form of an exam and was administered to both groups. All students took part or all of a two-part attitudinal survey. Part A dealt with demographics as well as how the students felt about the course and was taken by all students. Part B was administered only to cyberlearners and consisted of questions relating to evaluation of the instructional technologies.

Results: On the final exam the cyberlearners scored significantly better than the traditional learners. A two-way analysis-of-variance test was used to analyze scores by gender, ethnicity, and class level. The final exam was scored and the mean scores were compared with a t-test. The attitudinal survey was analyzed using a chi-square test and showed that there was no significant difference between the two groups. Cyberlearners were asked, in a multiple-choice format, about the reason they took the class online; forty-four percent responded that it was due to convenience.
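The chi-square comparison of the two groups' survey responses can be reproduced in miniature. This sketch computes the Pearson chi-square statistic for an invented 2x2 attitude table (the row totals echo the 46/89 completers, but the cell counts are made up):

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table
    (rows = groups, columns = response categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: rows = cyberlearners vs. traditional learners,
# columns = positive vs. negative attitude (not the study's data)
table = [[30, 16], [60, 29]]
stat = chi_square(table)
```

The statistic is compared against the chi-square critical value with df = (rows - 1)(columns - 1); for a 2x2 table at alpha = .05 that critical value is 3.84, so a statistic below it matches the study's "no significant difference" finding.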

Factors Jeopardizing Internal Validity: A chi-square test of independence failed to show any difference between the two groups as far as gender, ethnicity, or test scores, but the study fails to describe the makeup of the two groups.

Factors Jeopardizing External Validity: This study was conducted in one department at the University of California with an introductory class.

Design Improvement: This study would be greatly improved by a more realistic setting. The students who were taking the course online were closely monitored; the researcher had the online students take weekly tests to ensure they were keeping up with the readings. Internet courses, in my experience, depend on the person being self-motivated, not monitored.

Extension of the Study: The study could be repeated with students randomly assigned to each group.

Comments: I am interested in this study because I see many new teachers come into DISD with extremely limited computer knowledge. The study indicated that the greatest impact on computer usage was whether or not the supervising teacher valued computers as an effective teaching tool.

Critique of an Experimental Study

Reference: Tan, S.C. (2000). The effects of incorporating concept mapping into computer assisted instruction. Journal of Educational Computing Research, 23, 113-131.

Principle: This study attempted to investigate the effect of two independent variables on achievement in organic chemistry. The first variable was a concept map as a navigation interface in a computer-assisted instruction program. A second independent variable was creating a concept map as a learning activity.

Design: This experiment was a posttest-only control group design:

               R   X  O
               R   X  O
               R      O

The two different treatments were (1) using partial concept maps in CAI and constructing concept maps, and (2) using complete concept maps in CAI and taking notes. The control was using menu selection in CAI and taking notes. The posttest was a chemistry achievement test consisting of 60% high-level questions and 40% low-level questions, as classified by Bloom's taxonomy. In addition, each student was tested on their ability to create a concept map.

Factors jeopardizing internal validity: Most of the threats have been considered in this design; however, the possibility of non-equivalent groups is always present in small samples. Also, only one person judged the concept maps, so there might be a problem with instrumentation because there was no check for inter-rater reliability.

Factors jeopardizing external validity: The main threat to external validity is the interaction between selection bias and the experimental variable. The sample was taken from a special school which serves some of the top 10% of the students in Singapore. Children with high IQs might react differently to concept maps than the average student.

Adequacy of statistical procedures used: The author correctly used an ANCOVA to statistically control for the effect of the extraneous variable, mid-year chemistry achievement. In addition, the Bonferroni adjustment was correctly applied to the pairwise comparisons to keep the Type I error rate at the stated alpha level.
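To make the procedure concrete, here is a rough sketch of an ANCOVA-style comparison with Bonferroni-adjusted pairwise tests in Python. The data are simulated, not the study's; the group sizes, effect sizes, and the residualize-then-ANOVA shortcut are my own illustrative assumptions, not the author's actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: three groups (two treatments + control) and a
# mid-year achievement covariate -- purely illustrative numbers.
n = 30
covariate = rng.normal(70, 10, 3 * n)
group = np.repeat([0, 1, 2], n)                 # 0, 1 = treatments, 2 = control
effect = np.array([8.0, 5.0, 0.0])[group]      # assumed treatment effects
posttest = 0.6 * covariate + effect + rng.normal(0, 5, 3 * n)

# Rough ANCOVA: remove the covariate's linear effect, then compare
# the adjusted (residual) scores across groups with a one-way ANOVA.
slope, intercept, *_ = stats.linregress(covariate, posttest)
adjusted = posttest - (intercept + slope * covariate)
f_stat, p_val = stats.f_oneway(*(adjusted[group == g] for g in range(3)))
print(f"ANCOVA-style F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise comparisons with a Bonferroni-adjusted alpha, as the
# author did to control the Type I error rate.
pairs = [(0, 1), (0, 2), (1, 2)]
alpha_adj = 0.05 / len(pairs)                  # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(adjusted[group == a], adjusted[group == b])
    print(f"groups {a} vs {b}: p = {p:.4f}, "
          f"significant at adjusted alpha: {p < alpha_adj}")
```

The point of the Bonferroni step is visible in `alpha_adj`: with three pairwise tests, each is judged against 0.05/3 so the family-wise Type I error stays near the stated alpha.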

Logic summary: The author's logic is flawed. With the design combining the two independent variables, it is impossible to demonstrate whether it was the partial concept map, the concept mapping activity or the interaction between the two that improved achievement.

Design improvement: The author apologized for the lack of a fully crossed design (3 types of CAI interfaces X 2 types of learner's activities), but that would be the only way to accurately determine what is causing the increase in achievement. A more even ranking of the aspects of creating the concept map would also improve the operationalization of the variable.

Extension of the study: A fully crossed design should be used to accurately determine the effects of the two independent variables. Another way to test the use of concept maps on the CAI interface would be to have the links on the relationships instead of the concepts and see if that increased the achievement. A more representative sample would also increase the external validity.

Critique of an Experimental Study

Reference: Atkinson, S. E. (1999). Key factors influencing pupil motivation in design and technology. Journal of Technology Education, 10(2).

Principle: The relationship that exists between pupil motivation and these internal and external factors: pupil performance in design and technology (D&T) project work, pupil skills associated with D&T project work, pupil personal goal orientation, pupil cognitive style, pupil creativity, teaching strategy, and teacher motivation.

Type of Design: Data for the study were collected throughout a GCSE design and technology course-work project. A cognitive style questionnaire was given at the beginning of each academic year; a summative questionnaire, a goal orientation index, and a creativity test were given upon completion of the project. The study examined the final year of a four-year study of pupil performance in design and technology. The statistics used were a one-sample chi-square test, unpaired t-tests comparing means, a chi-square test for independence, and Fisher's Exact Test for 2×2 tables. The design of the study would be X O.

Factors jeopardizing internal validity: One factor that would jeopardize internal validity is history: the study covered only the last year of the four-year program. Maturation is also a factor, in that the persons studied were at the end of the study and could have become quite bored with it. The small size of the group studied also hurts internal validity.

Factors jeopardizing external validity: Multiple-treatment interference could affect external validity, in that this study was conducted on the last year of a four-year program of design and technology. The small number of students (50) observed in the study could also affect external validity.

Adequacy of statistical procedures used: Project work was scored on a four-point scale, supplemented by observation sheets and semi-structured informal interviews. The statistics applied were a one-sample chi-square test, unpaired t-tests comparing means, a chi-square test for independence, and Fisher's Exact Test for 2×2 tables.
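As a hedged illustration of the four procedures named above, here is how each would be run with `scipy.stats`. All counts and scores are invented for the example; they are not Atkinson's data.

```python
from scipy import stats

# One-sample chi-square: do observed response counts differ from an
# even split? (Counts here are made up for illustration.)
chi2_one, p1 = stats.chisquare([30, 20])

# Unpaired t-test comparing two groups' mean scores.
t, p2 = stats.ttest_ind([12, 15, 14, 16, 13], [10, 11, 9, 12, 10])

# Chi-square test for independence on a 2x2 contingency table.
table = [[18, 7], [12, 13]]
chi2_ind, p3, dof, expected = stats.chi2_contingency(table)

# Fisher's Exact Test, preferred for 2x2 tables with small cell counts.
odds_ratio, p4 = stats.fisher_exact(table)

print(f"one-sample chi-square p = {p1:.3f}")
print(f"unpaired t-test p = {p2:.3f}")
print(f"independence chi-square p = {p3:.3f}")
print(f"Fisher's exact p = {p4:.3f}")
```

Note that the chi-square independence test and Fisher's exact test address the same 2×2 table; Fisher's version is the safer choice when expected cell counts are small, which is presumably why the author used it alongside chi-square.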

Brief summarization of logic: The logic of the study is that motivation affects performance: how a student does on a project depends on that student's motivation. The study determined that a large percentage of the students (60%) were unmotivated and that 22% did not enjoy the activity. Deadlines and the delivery method of teaching strategies played a role in motivation and performance.

Design improvements: The study should have tracked the group from year one to year four of the project instead of just the last year; a more valid comparison of the factors affecting the group's motivation and performance could then have been made. Data should have been gathered using a pretest and posttest along with the observation and interviews. This design is represented as

               O X O
                 X O

Extension of the study: The study could be conducted over the four-year life of the project using different groups instead of just one. It could also be done with groups from different regions of the country, or from different countries altogether.

Critique of an Experimental Study

Reference: Lyman, S. A., Williams, D., & Begnaud, L. (2000). Using the Internet to enhance the study of human sexuality. ComputEd, 6.

Principle: The Internet can effectively be used to enhance the study of human sexuality in higher education.

Type of Design: A course called Health and Sexuality was designed and presented as a web-based course to be taught in the Spring 2000 semester in the Department of Kinesiology at the University of Louisiana at Lafayette. The course was an elective open to senior students. The students were given a choice of either taking the web-based course or taking the course by attending a formal class. A total of 40 students signed up for the course, but 11 of them opted for attending the formal class. The course utilized Blackboard, which is an on-line course delivery system. Discussion boards, e-mail, and chat rooms were utilized as means of communication. All course materials were obtained from the web. The students met with the professor during the first two weeks of class for an orientation that taught them how to use Blackboard and oriented them to the computer. The students were examined by coming to a general class meeting and taking the multiple-choice exam together. Blackboard could issue an exam via the web, but taking the exam in a class was the design chosen for this experimental course. Course requirements were the same for students who took the web-based course as for the students who came in for a formal class.

Factors Jeopardizing Internal Validity: One factor that would jeopardize internal validity is experimental mortality: dropout of students from the course would narrow the sample and call the sample size into question. The interaction of selection biases with the experimental variable is another factor that could jeopardize internal validity. The only testing is a final exam; there is no beginning exam against which to compare the student's success. Also, does a good score on the subject exam guarantee that the student has effectively learned via the Internet? The class would meet a week before the exam; would this review meeting be the real indicator of how well the students do on their exams, and would testing in a formal class rather than over the Internet not take away from the validity of the course? A comparison of the formal class with the experimental web-based course would need to be done in order to statistically validate the success of the web-based course. Personal biases of the students toward computers could greatly affect the desirability of this course before it even gets started; students should be very open-minded when undertaking this sort of course. Personality factors such as procrastination could greatly affect the success of this course, and students must be self-disciplined to take it. Also, time alone can greatly influence a student's familiarity with performing the required computer tasks, and could be the factor that determines whether the student succeeds with the material.

Factors Jeopardizing External Validity: There are several factors that could jeopardize external validity. One would be that only college seniors were utilized for the experiment. Would the web-based course be effective for a sophomore or a junior student to take? Also, the setting of being at home or other off-site location can greatly affect the course work. It would be easy to generalize that the web-based course students would be at a greater disadvantage than the students taking the formal class.

Adequacy of Statistical Procedures Used: Questionnaires were given to the students and the professor at the end of the course, and this was the data used to determine how effective the experimental course was. Actually, what this helped determine was what changes could be made to make the course more comfortable, so that the advantages outweighed the disadvantages. The scores on the course exams were not used as a data measure, when in fact these scores should have been compared against the formal class (control group) to determine whether the web-based course students performed at a higher, lower, or the same level as the formal-class students.

Briefly Summarize Logic (Inductive and/or Deductive): The logic of this experiment is that computer technology necessitates a change in the approach to education at the university level. Web-based, web-enhanced, and full web courses are a few of the changes in education that this logic has brought about. One line of reasoning holds that professors should be freed from the task of teaching academic subjects so that they can attend to more social issues, such as advising, assisting, and listening to students who may be having problems in their lives that could keep them from attending college. This is the logic that surrounds the empirical design of this web-based course.

Design Improvement: The course should be announced as an entirely web-based course with no group gatherings in a class, and all exams should be given on-line. That way a more valid comparison could be made against the control group that attends the formal class. Also, a separate formal class should be offered, rather than having web-based students decide afterward that they want to take a formal class; this would make the experimental sample more valid. Data should be analyzed on the desirability of the web-based course versus the formal class, and exam performance should be compared between the two to help determine whether the web-based participants performed higher, lower, or the same as the control group. This is the posttest-only control group design.

Pretests in the ordinary sense are impossible in this research, but a posttest is logical and convenient. The t test, analysis of covariance, and blocking on subject variables would be the appropriate statistical procedures for this experimental design.
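The posttest-only comparison suggested above can be sketched as a two-sample t-test. The exam scores below are simulated, and the group means, spreads, and the 29/11 split mirroring the enrollment figures are my own illustrative assumptions; Welch's test (unequal variances) is used because the self-selected groups are of very different sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated final-exam scores: web-based group (n = 29) and the
# formal-class control group (n = 11) -- illustrative numbers only.
web_scores = rng.normal(78, 8, 29)
formal_scores = rng.normal(80, 8, 11)

# Welch's t-test: appropriate for a posttest-only comparison of two
# groups with unequal sizes and possibly unequal variances.
t_stat, p_value = stats.ttest_ind(web_scores, formal_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Groups differ significantly on the posttest.")
else:
    print("No significant difference between delivery formats.")
```

Because students chose their own format, a significant difference here would still be confounded with selection; covariance analysis or blocking on subject variables, as suggested above, would be needed to adjust for pre-existing group differences.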

Extension of the Study: This study should go on for several years to determine the progression of web-based courses as an effective means of teaching Health and Sexuality to college students.