Assignments

Assignment #1 - Article Summary and Review

Due Date: September 13, 2010

Value: 15 points total

Submission:

  • Post the written part of the assignment to the assignment forum.

Description:

Locate and review a quantitative article related to the impact of technology on education.

The article/document needs to describe either a true (randomized) experiment or a quasi-experiment.

No qualitative studies, case studies, or other non-positivistic research.

Once you find an article, you need to:

  1. Write a one page summary based on the guidelines provided below.
  2. Post the summary to the assignment forum.
  3. Reply to at least two postings in a meaningful manner.


Guide for Summarizing an Experimental Study

Reference: APA format

Principle: Concise statement of the relationship being investigated. May differ from the author's, especially if you see the study as related to an area that is not its primary focus. Note that the principle is not a hypothesis but the broader idea being tested by the specific hypothesis of the study.

Research Question and/or Hypothesis: What research questions and/or hypotheses are being examined?

Design: Diagrammatic representation of the experimental groups. Indicate random assignment or not.

Population and/or Sample: Number of subjects in each group (control and treatment).

Random Sample?: Describe how the sample was selected. Was selection random, or is the design quasi-experimental?

Independent Variables: Summary statement of treatments. Define operationally. May not agree with author's definition. Be accurate and concise.

Dependent Variable: Measures to be compared. Define operationally. Be specific.

Procedures: Diagrammatic representation of steps or phases in data collection. If special apparatus is used, a note concerning its characteristics might be included here.

Results: Graphic representation of differences observed. Be concise. Represent only findings relevant to the principle. The principle may not be the author's primary focus, hence secondary findings may be appropriate for your purposes. Indicate significant differences. A copy of an analysis of variance table or chi-square figures is not a result.

Comments: This should be a statement relating the study to other studies, citing why you thought it important or interesting, or any other information of use to you when you review the summary at a later time.

Notes:

  1. The entire summary should fit on a single sheet - one side.
  2. Be concise. Draw pictures. Don't just copy statements. Abbreviate, but don't lose meaning.
  3. Summarize the study for your purposes, not the author's. Your principle may be different from the author's. The study may test several principles while your interest is in only one. Auxiliary results may be interesting but unnecessary.
  4. If you see the study as related to the principle in an unusual way, not obvious to a casual reader, be sure to summarize your reasoning in the comments.
  5. The author's conclusions may be useful but usually are not. Put the emphasis on demonstrated differences (empirical relationships), not on the author's conclusions.
  6. Remember this is a summary, not a critique. Try to represent exactly what happened.

Examples

Summary of an Experimental Study

Reference: Whalen, T., & Wright, D. (1999). Methodology for Cost-Benefit Analysis of Web-Based Tele-Learning: Case Study of the Bell Online Institute. The American Journal of Distance Education, 13(1), 24-44.

Principle: This case study investigates differences in the cost of lessons taught in three different Web-based learning platforms and the traditional classroom.

Design: All assignments are random, and the number of subjects in each course is unknown.

Independent Variables: The treatments found in the case study are Web-based learning platforms including WebCT (asynchronous), Mentys (asynchronous), Pebblesoft (asynchronous), Symposium (synchronous) and the traditional classroom.

Dependent Variable: Fixed capital and variable operating costs of each type of learning environment are measured and compared. Fixed costs include use of the server platform and content development, while variable costs represent the money spent on the time students and instructors spend using the courses.
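
As a rough illustration of this fixed-plus-variable cost structure, here is a minimal sketch in Python; the figures and the function name are hypothetical and are not taken from the article:

  # Per-student cost = (fixed development cost spread over enrollment) + variable cost.
  # All numbers below are made up for illustration only.
  def cost_per_student(fixed_cost, variable_cost_per_student, num_students):
      return fixed_cost / num_students + variable_cost_per_student

  # Hypothetical comparison: a web course with a high development (fixed) cost but
  # a low delivery cost versus a classroom course with the opposite profile.
  web_course = cost_per_student(fixed_cost=50_000, variable_cost_per_student=40, num_students=500)
  classroom = cost_per_student(fixed_cost=5_000, variable_cost_per_student=300, num_students=500)
  print(f"Web-based course: ${web_course:.2f} per student")
  print(f"Classroom course: ${classroom:.2f} per student")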

Procedures: During phase one, instructional courses were developed for each learning environment, and data pertaining to the related costs were collected. In phase two of the case study, variable costs were collected during the delivery of the various instructional courses.

Results: Costs pertaining to web-based learning courses were lower per student than the costs of courses in a traditional classroom. Furthermore, the synchronous learning environment cost less to develop than the other web courses because there were fewer graphics and no videos included in the presentation.

Comments: I thought this case study was interesting because it showed that there are cost benefits to web-based learning. Cost is probably a main focus for instructional technologists of any school district. Furthermore, I think it could be beneficial for elementary and middle school teachers to get together with other teachers in their districts to develop web-based lessons in their fields of expertise and share them with each other's classes.


Summary of an Experimental Study

Reference: Gokhale, A. A. (1995). Collaborative Learning Enhances Critical Thinking. Journal of Technology Education, 7(1). Retrieved 2/7/01 from http://scholar.lib.vt.edu/ejournal/JTE/v7n1/gokhale.jte-v7n1.html

Principle: The study examined the effectiveness of collaborative learning in enhancing drill-and-practice skills and critical-thinking skills among college students in a technology course.

Type of Design: The subjects were students in two undergraduate classes enrolled in an industrial technology course. Each class consisted of twenty-four students. One section was randomly assigned to individual learning while the other section was assigned to collaborative learning.

Independent Variables: The treatment consisted of two parts: a lecture and a worksheet.

Dependent Variables: A pre-test was used to assess prior knowledge, and a post-test was used to measure the treatment effect.

Procedure: Both groups were simultaneously given a fifty-minute lecture. After the lecture, one section of students was randomly selected to participate in individual learning while the other section was assigned to collaborative learning. A worksheet, which consisted of both drill-and-practice and critical-thinking items, was given to both groups.

Individual learning: The students were given thirty minutes to complete the worksheet. After the allotted time was over, students were given fifteen minutes to compare their answers to an answer sheet. Lastly, they were administered a posttest.

Collaborative learning: Students were given thirty minutes to solve the same worksheet and were encouraged to discuss problem solutions. After the allotted time, fifteen more minutes were given for group members to discuss their answers with other groups. Lastly, they were administered a posttest.

Results: Students who participated in collaborative learning did better on the critical-thinking items. Both groups did equally well on the drill-and-practice items.

Comments: Working in collaborative groups is very beneficial in this day and time. More undergraduate students should have the opportunity to work in these settings.


Summary of an Experimental Study

Reference: Clay-Warner, J., & Marsh, K. (2000). Implementing Computer Mediated Communication in the College Classroom. Journal of Educational Computing Research, 23(3), 257-274.

Principle: This article examines the use of computer-mediated communication (CMC) in the college classroom and suggests issues that other instructors should consider when implementing CMC into their classrooms.

Design: The research design is a survey questionnaire administered during the first week of the course and at the end of the term to assess students' experiences with LearnLink and their attitudes about computer-aided instruction.

Independent Variables: Independent variables included demographic variables, the number of classes in which the student had previously used LearnLink, and whether or not the student reported having a LearnLink connection at their residence. Openness to using LearnLink, level of previous experience, and use-and-interaction variables were also included.

Dependent Variables: Students' ratings of their experience with LearnLink in their course.

Procedures: Three classes were set up to use LearnLink in three different ways with two different instructors. The first instructor used LearnLink only to answer questions and post announcements. The second instructor used sub-conferences to increase participation. An initial survey was administered to assess students' attitudes and level of previous use of LearnLink. Students who answered "yes" to having used LearnLink before were then asked a series of questions about their most recent LearnLink experiences. The students were asked to write the month and date of their mother's birth so the surveys would remain anonymous. The questions on the follow-up survey were the same as on the initial survey, with the focus on LearnLink. In addition, students were asked about problems in the courses, for an assessment of obstacles with LearnLink, and for an overall rating of each course.

Results: The results of the initial survey were that the students liked LearnLink for asking questions and receiving announcements, assignments, and grades. They did not like LearnLink for testing, turning in assignments, or accessing readings. The end-of-term survey showed there was no association between gender and use preferences. There was no significant relationship between students' year in school and their openness to CMC, and previous use was not significantly related to desired use. When the use and interaction variables were factored in, there was a significant relationship. These additional variables suggest that the way in which CMC is utilized has the greatest effect on student ratings.
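
For readers unfamiliar with how such associations are usually tested, here is a minimal sketch of a chi-square test of independence in Python; the counts, variable names, and categories are hypothetical and are not taken from the study:

  # Chi-square test of independence between two categorical survey variables,
  # e.g., gender vs. preferred use of LearnLink. Counts are hypothetical.
  import numpy as np
  from scipy.stats import chi2_contingency

  # Rows: female, male; columns: prefer announcements, prefer testing/turn-in.
  observed = np.array([[42, 18],
                       [38, 22]])

  chi2, p_value, dof, expected = chi2_contingency(observed)
  print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
  if p_value >= 0.05:
      print("No significant association between gender and use preference.")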

Comments: Computer access and usage is becoming more and more prevalent among college students. Even if students don't have computers of their own, there is sufficient access to computers at the college to meet their needs. As with any new technology, we are at first apprehensive and anxious about using it, but today's college students are ready, willing, and able to embrace computer-assisted learning. Like the author, I feel that this article left one question unanswered: does CMC affect student performance? We always need to know whether we are enhancing the learning environment with our new technology.


Summary of an Experimental Study

Reference: Conklin, K. A. (1999, Fall). Measuring Computer Usage in a College Setting. Journal of Applied Research in the Community College, 7(2), 25-31.

Abstract: This paper describes a two-part study of computer familiarity and use. It was conducted in the spring of 1998 at Johnson County Community College (JCCC), a suburban community college in the Midwest. The purpose of this study was two-fold. The first goal was to ascertain students' familiarity with computers and their future computer needs. The second goal was to determine where the faculty stood with technology in terms of (a) the degree to which faculty members were utilizing computers, (b) faculty assessments of how the college was addressing computer-related needs, and (c) faculty plans for incorporating computer technology into their instructional work, both in planning and teaching classes.

Review: The author begins by stating, "The rapid increase in the use of technology in education has altered nearly every facet of the learning process for both faculty members and students." References are then cited to support her proposal. With the reasons and goals of her study clearly defined, the author proceeds to present the results of several surveys administered to students and faculty.

A total of 452 students in 49 selected classes completed surveys, with respondents representative of the student body in terms of age and gender. Results revealed that three out of four students considered themselves at least somewhat experienced with computers, while only 3.5% indicated they had no computer experience. Three out of four respondents indicated they completed assignments on their home computers. One in two respondents indicated they would seldom or never need campus computers to complete assignments.

Approximately 44% of the faculty completed the survey; the author did not give the actual number (n). Nine out of ten faculty were using computers, mostly for word processing, e-mail, or Internet access. Only one in three respondents was building web pages, developing multimedia presentations, or using grade book and test generation software. Of those not currently using computers in their work, approximately one-third planned to begin using them within the next year or two, one-third were unsure, and one-third did not plan to begin using computers in the near future. Problem areas identified by faculty respondents differed for full-time versus adjunct faculty (although differences were not tested for statistical significance by the author). But both groups generally highlighted the need for more technical assistance for both hardware- and software-related problems, the need for more frequent upgrades of software and hardware, and the need for more training. A recurring comment dealt with the problem of faculty finding time to train, produce slide shows, or use the college's resources to best advantage.

The author concludes by stating that "Results of the student and faculty usage studies have helped the college plan its expanded use of technology and design appropriate training opportunities for faculty and staff," although the author did not go into specifics. Overall, no major discoveries were revealed beyond what previous studies have shown.

Comments: It appears the author knew the age and gender make-up of the surveyed students since she claimed they were a representative sample of the college. I would like to have seen some type of correlation study done regarding age/gender (or even ethnicity, full-time/part-time status, degree plan, etc.) versus computer usage. Perhaps some groups of students would show differing degrees of computer usage, thereby possibly impacting the college's technology related strategies.


Summary of an Experimental Study

Reference: McIntyre, D. R., & Wolff, F. G. (1998). An experiment with WWW interactive learning in university education. Computers & Education, 31, 255-264.

Principle: In this study the principle is that students who have access to interactive WWW instruction learn better than students who are given only classroom instruction.

Design: Students were not randomly assigned to a class. The number of students in each class was not stated.

Class 1 - using interactive WWW
Class 2 - not using the WWW

Independent Variables: The independent variable in this study was the use of interactive WWW learning. One class was instructed in how to use the interactive lessons available on the web site that was created and was encouraged to interact with other students via the web site. The other class was given instruction in the classroom and was not given access to the web site.

Dependent Variables: The dependent variable was the achievement of the students in each of the classes. The test scores of Class 1 were compared to the test scores of Class 2.

Procedures: The two groups of students were not given a pre-test prior to taking the class. Classroom instruction was given to Class 1 and supplemented with interactive lessons on the web site. Instruction was given to Class 2 in the classroom with additional classroom time for those students who needed additional instruction. A test on C pointers was given to both classes after instruction.

                                   Pre-test   Post-test
Class 1 (using interactive WWW)    None       Classroom test
Class 2 (not using the WWW)        None       Classroom test

Results: [Graph comparing the test scores of the two classes not reproduced; legend: Class 1 - blue, Class 2 - red.]

Comments: I thought this study on the use of the WWW and its effect on achievement was very interesting. However, the study was not specific about the number of students involved and tested the students' achievement only one time.


Summary of an Experimental Study

Reference: Lee, M.-G. (2001). Profiling students' adaptation styles in Web-based learning. Computers & Education, 36, 121-132.

Principle: This article discusses Web-based instruction and how adaptation styles affect this type of learning.

Design: 1000 questionnaires were given to students attending 11 similar Korean universities that had been experimenting with Web-based instruction since 1998 under the Ministry of Education. The questionnaire consisted of four parts. The first part asked for background information about the subjects. The second part assessed the subjects' perceptions of the instruction-learning environment. The third part asked about specific instruction-learning processes, and the fourth part asked about the subjects' perception of overall Web-based instruction.

Independent Variables: 334 questionnaires in which most items were adequately answered were collected for this study (a 33.4% response rate). Of the respondents, 177 (53%) were female and 157 (47%) were male. The first section of the questionnaire asked questions about sex, age, status, and major.

Dependent Variables: The dependent variables are the participants' computer literacy level, their participation, interaction with the instructor, difficulty of the content, rate and quality of the instructor's responses, interaction with peers, provision of an on-line database, use of an on-line database, and any Internet access fees.

Procedure: A cluster analysis of the survey data was used to identify Web-based learners' adaptation styles. Since the data were not meaningful by themselves due to the small number of instances, these adaptation styles were retrospectively analyzed by looking at the learners' perceptions of their own achievements, their satisfaction with overall Web-based learning, and the learning context directly related to their perceptions. Each adaptation style was then examined by analyzing the variables most related to that particular style. Data were processed using SAS for Windows.
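
The study used SAS; as a rough illustration of what a cluster analysis of survey responses looks like, here is a minimal sketch in Python with scikit-learn. The item names, sample data, and choice of four clusters are assumptions for the example, not details from the study:

  # Standardize the survey items, then group respondents into clusters that
  # play the role of "adaptation styles". All data below are invented.
  import pandas as pd
  from sklearn.preprocessing import StandardScaler
  from sklearn.cluster import KMeans

  responses = pd.DataFrame({
      "participation":          [4, 2, 5, 1, 3, 4],
      "instructor_interaction": [5, 1, 4, 2, 3, 5],
      "peer_interaction":       [4, 2, 5, 1, 2, 4],
      "content_difficulty":     [2, 4, 1, 5, 3, 2],
  })

  scaled = StandardScaler().fit_transform(responses)
  kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
  responses["adaptation_style"] = kmeans.labels_  # cluster membership per respondent
  print(responses)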

Results: The interrelationship between Web-based learners' perception of their own academic achievements and their satisfaction with overall Web-based instruction was statistically significant. Four distinct adaptation styles were identified: "model learners", "maladaptive learners", "disenchanted learners", and "fanatic learners". Four dependent variables affected Web-based instruction and adaptation styles the most: difficulty of the content, quality of instructor responses, effects of interaction with peer students, and use of an on-line database. Model learners used instructors and peers more effectively than the other learner styles.

Comments: Based on the research, it appears that the more a student uses the resources available to them in Web-based learning, the better the student will do academically and the greater their overall satisfaction with Web-based learning will be. If the instructor takes students' adaptation styles into consideration and guides the students on how to make their Web-based class more productive and meaningful, the Web-based class will be more successful. The author of this article feels that more research with better surveys will lead to more meaningful results.


Summary of an Experimental Study

Reference: Enyedy, N., Gifford, B., & Vahey, P. (2000). Learning Probability Through the Use of a Collaborative, Inquiry-Based Simulation Experiment. Journal of Interactive Learning Research, 11(1), 51-84.

Principle: This article attempts to assess whether a computer simulation aids in the instruction of a probability unit.

Design: The sample for this experiment consisted of two seventh grade classes that implemented the PIE (Probability Inquiry Environment) curriculum (a computer simulation) and two sections of seventh graders taught in a traditional manner. Each class was administered a pretest and a posttest.

Independent Variables: The use of a computer simulation in addition to the traditional curriculum.

Dependent Variables: Student achievement in the mathematics domain of probability.

Procedure: The pretest and posttest were constructed from standardized tests and suggestions from the NCTM (National Council of Teachers of Mathematics). The research report focused on the posttest and even gave examples of actual test questions. It did not describe the pretest, other than to note that it was easier than the posttest. The tests were multiple choice and short answer. A blind scoring process was used so that the researcher did not know to which group each student belonged.

Results: A three-way analysis of variance was carried out on three factors: condition, gender, and test score. Both condition and test score had a significant main effect. Gender was found to have no main effect. In addition, t-tests were done on the pretest and posttest. There were no significant differences between the two groups on the pretest, but there was a significant difference on the posttest.
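
As an illustration of the kind of group comparison reported above, here is a minimal sketch of an independent-samples t-test on posttest scores in Python; the scores are invented and are not data from the study:

  # Compare posttest scores of the simulation (PIE) group and the traditional group.
  # Scores are hypothetical, for illustration only.
  import numpy as np
  from scipy import stats

  pie_posttest = np.array([78, 85, 90, 72, 88, 81, 93, 77])    # PIE (simulation) classes
  trad_posttest = np.array([70, 74, 82, 65, 79, 73, 80, 68])   # traditional classes

  t_stat, p_value = stats.ttest_ind(pie_posttest, trad_posttest)
  print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
  if p_value < 0.05:
      print("The difference between the groups is significant at the .05 level.")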

Comments: As a math teacher, I find that probability continues to be a topic that students struggle to learn. It is important to have students conduct probability experiments, but I can see how a computer simulation could make things more efficient and time could be better spent on discussion.