Timothy J. Xeriland, Ph.D. Candidate

Promotional Film



I have been teaching in higher education for over 14 years.  In my first two years of teaching, my classes were heavy on assessment.  The majority of the grade came from four difficult tests and a final.  Even though my classes soon became known as the most difficult at the university, I wasn’t convinced that the students were learning the material on a deep level.

After much consideration, I decided to run an experiment: a class in which the entire grade was based on a portfolio.  Students actually rebelled and hated the idea of not having tests.  I held fast and gave them a lot of writing assignments to include in their portfolios.  Each week, I would collect the portfolios and read through them, leaving comments about improvements they could make (that's right, no grades, just comments).

At the end of the semester, students submitted large portfolios with all their work.  The amazing part was that many of the portfolios were on par with what I would expect graduate students to produce, not incoming freshmen.  Out of curiosity, I decided to give them my typical final just to see how they would do, without counting it for a grade.  They moaned and groaned, but took the test.  Amazingly, on average they scored much higher than my classes that had tests all semester.  What was going on here?  Let's find out...

Research Interest

Initial Statement of Interest
The past several decades have seen explosive growth in two areas of education: assessment and online learning.  In recent years, the efficacy of formative assessment has been demonstrated.  Formative assessment can be defined as practice that elicits information about student learning and then uses that information to modify teaching and learning. For logistical reasons, this type of assessment is used less often in an online environment.  I would like to determine how efficacious formative assessment is for student retention, student interest, and student content mastery in an online environment.  In addition, I hypothesize that formative assessment would be of particular importance for online mathematics students.

Final Revised Statement of Interest
I am interested in understanding the effects of formative assessment on student retention, student interest, and student content mastery in an online environment. In particular, my research is focused on how formative assessment may be beneficial for online undergraduate mathematics students.

Development of my Statement of Interest
My initial statement included extra background information and was not direct enough.  After feedback from my Base Group, I modified the wording of the definition of formative assessment. Upon reading many articles, I encountered different definitions of formative assessment, which led to a conceptual change in my thinking.  The definition can vary based on the research questions being studied, and because of this I removed a specific definition from my statement of interest.  Final adjustments included replacing the word "effectiveness" with "effects" because of the expectations associated with it; by not qualifying the effect, I allow for more variability.  I also narrowed my target population to undergraduate students.  These changes have tightened and clarified my research statement, not only for myself but for others.  This has been an exciting process for me.  I have been contemplating this topic for many years, and having it come together in a more concrete form has served to focus my thinking. I am eager to see where this road leads me.

Three Experts
  • Paul Black, Emeritus Professor of Science Education, King's College London.
    His work involves assessment - both formative and summative - and the school curriculum in science and in design and technology. He coauthored the well-cited paper on formative assessment, "Inside the Black Box: Raising Standards through Classroom Assessment." Because of Paul Black's research, additional work is being pursued on the theory of formative assessment and its role within pedagogy, differences among school subjects in their formative assessment practices, and e-assessment and summative assessment. While most research on formative assessment focuses on student content mastery, Paul Black has placed special emphasis on student interest. He has found convincing evidence that formative assessment increases student interest. It is because of Paul Black's work on student interest that formative assessment is so intriguing to me. Anecdotally, I have found that students who are interested in the subject matter ultimately learn the material in a deep manner. Student interest is one of the main dependent variables I would use in my own research, and I would consult the work of Paul Black in doing so. I specifically include student interest in my research statement because of his research.

  • Dylan Wiliam, Deputy Director, Institute of Education, University of London.
    His main interest is in exploring how assessments may be used to support learning. He coauthored the well-cited paper on formative assessment, "Inside the Black Box: Raising Standards through Classroom Assessment." The results of his work will continue to guide me as I consider the effects of formative assessment on student retention, student interest, and student content mastery. Dylan Wiliam is engaged in public speaking and publishes up-to-date podcasts on formative assessment. Through his use of new media communication, I am gaining the very latest research developments on this topic. Some of his recent work emphasizes formative assessment in an online environment. Because of the compelling information he is providing, my research concentration is specifically in the area of online learning. For example, Wiliam found that online environments have 60% fewer formative assessment strategies in use than classrooms do. Because of the growth of online learning and the effectiveness of formative assessment that has been demonstrated in the classroom, I specifically tie my research interest to the online environment. I will be looking to Wiliam's work to further my understanding of these connections.

  • Beverley Bell, Associate Professor, Department of Professional Studies in Education,
    School of Education, The University of Waikato, New Zealand.

    Her research and teaching are in the areas of pedagogy, learning, teacher education, and assessment, with a focus on science education, secondary education, and teacher development. She specializes in formative assessment in science and mathematics. This is particularly relevant to my research interest and will be beneficial to my work with online undergraduate mathematics students. Beverley Bell's expertise in mathematics brings great insight into how I should best proceed with my focus. Moving forward with my research interest, I am looking at follow-up questions to her studies that are guiding my thinking. For example, her study "A Model of Formative Assessment in Science Education" leaves me with the question, "What type of teacher training is needed to develop the skills of formative assessment?"

Annotated Bibliography

Black, P., & Wiliam, D. (1998). Inside the black box. Phi Delta Kappan, 80(2), 139.
This article considers how formative assessment can improve classroom results by increasing the effectiveness of the teacher. The claim is that teachers need to adapt to their students' needs. Feedback is specifically labeled formative assessment "when the evidence is actually used to adapt the teaching work to meet the needs." The authors compiled results from the research literature and determined there is evidence that 1) improving formative assessment raises standards, 2) there is room for improvement, and 3) we know how to improve. They claim that formative assessment is a crucial part of improving classroom practice, and that the key is to increase the quality of teacher-student interactions and to assist students out of the 'low-attainment' trap.

One strength of this article is that it combines results from a wide range of studies; its weakness is that it contains no original research of its own, so analyzing and verifying the results requires going back to the original studies. Even so, this is still considered the definitive paper on formative assessment, and it captured my attention many years ago. I have seen firsthand the impact formative assessment can have on student retention, interest, and content mastery in the classroom. I took note that in their review of the research literature, they found effect sizes as great as 0.7, which is much larger than what is found for other educational interventions. An effect size of 0.7, if realized, would have a tremendous impact on student performance. It seems to make sense to focus on formative assessment and its effects. I would like to specifically look at these effects in an online environment. What are the similarities and differences of formative assessment in face-to-face and online courses? I would like to see whether similar effect sizes can be found in an online environment. This could be analyzed with regard to the method of feedback provided and the differences between formative and summative assessment.
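The 0.7 figure discussed above is a standardized effect size (Cohen's d): the difference between group means expressed in pooled standard deviation units. As a minimal sketch of what that statistic measures, the following computes d from two sets of exam scores; the scores themselves are invented purely for illustration.

```python
# Illustrative sketch (not from Black & Wiliam): Cohen's d, the
# standardized effect size, is (mean1 - mean2) / pooled standard deviation.
import math

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical final-exam scores: an effect size of 0.7 would mean the
# treatment mean sits 0.7 pooled standard deviations above the control mean.
treatment = [78, 85, 82, 90, 74, 88]
control = [70, 75, 72, 80, 68, 77]
print(round(cohens_d(treatment, control), 2))
```

An interpretation aid: with d = 0.7, the average student in the treatment group would outperform roughly three-quarters of the control group.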


Buchanan, T. (2000). The efficacy of a world-wide web mediated formative assessment. Journal of Computer Assisted Learning, 16(3), 193-200.
The focus of this study was to evaluate the effectiveness of a Web-based formative assessment program, the Psychology Computer Assisted Learning (PsyCAL) system. The program computerizes formative assessment and Internet-mediated teaching with the goal of "closing the gap between actual and desired levels of performance." Correct answers to missed problems were not given; instead, students were referred to the place in the text where the correct response could be found. Two studies were done: in Study 1, use of PsyCAL was required, and in Study 2, use of PsyCAL was optional. In Study 1, use of the program was positively correlated with exam performance, and there was a 10% difference in final exam scores between users and those who did not use the program, even though use was compulsory. Study 2 had a low sample size and self-selected participants; the researchers noted significantly better performance on the Project element, but no difference on the SPSS element of the final assessment. PsyCAL is an example of the "meaningful interaction between student and instructional material…an essential component of successful pedagogy…provided through technology." It is an alternate opportunity from which students benefit.

I am interested in formative assessment techniques and programs for undergraduate mathematics students. Math courses are often the gatekeeper classes that students struggle to pass through. By implementing various formative assessment processes, I believe there will be a significant increase in content mastery, thus allowing students to progress in their overall plan of study. With automated formative assessment, feedback is immediate, and the online student obtains a benefit similar to the one a face-to-face student gains from classroom interactions with the instructor. This automation of content is interesting in that it allows for more timely responses, one of the characteristics of good formative assessment. I found it interesting that "usage statistically significantly predicted performance, even when class attendance was controlled for." However, the research was based on observational measures, which has me thinking about the necessary conditions for a more rigorous experimental study. Also, the PsyCAL feedback did not provide the correct responses, and there is potential to investigate the difference, if any, with a program that does provide correct responses as the method of formative assessment.
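The feedback mechanism described above, pointing the student back to the text rather than revealing the correct answer, can be sketched in a few lines. This is a hypothetical illustration of the idea, not PsyCAL's actual implementation; the quiz item, answer, and text reference are all invented.

```python
# Hypothetical sketch of reference-based formative feedback: on a wrong
# answer, return a pointer to the source material instead of the answer.
from dataclasses import dataclass

@dataclass
class QuizItem:
    prompt: str
    answer: str
    text_reference: str  # where in the textbook the topic is covered

    def give_feedback(self, response: str) -> str:
        if response.strip().lower() == self.answer.lower():
            return "Correct."
        # Formative feedback: direct the student to the reading rather
        # than revealing the correct response (as PsyCAL reportedly did).
        return f"Not quite. Review: {self.text_reference}"

item = QuizItem(
    prompt="Which measure of central tendency is most affected by outliers?",
    answer="mean",
    text_reference="Chapter 3, section on descriptive statistics",
)
print(item.give_feedback("median"))
print(item.give_feedback("mean"))
```

The design choice being illustrated is that the feedback closes the gap through the student's own rereading, which is exactly the "meaningful interaction between student and instructional material" the authors describe.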

Cowie, B., & Bell, B. (1999). A model of formative assessment in science education. Assessment in Education: Principles, Policy & Practice, 6(1), 101.
This paper results from a study of formative assessment in the science classrooms of 10 teachers, and it outlines the features of formative assessment. It focuses on two types of formative assessment: planned and interactive. Planned formative assessment typically occurs with the whole class, while interactive formative assessment is more typical in small groups or one-on-one. The article considers how these types of formative assessment are related to each other and to the teaching and learning process, as well as the role of the teacher's own pedagogical knowledge. It was noted that the "ability to discriminate between relevant and irrelevant information" is essential in the process of formative assessment and improves with experience. Teachers must elicit and interpret information for planned formative assessment, and they must recognize and respond to opportunities for interactive formative assessment; both are skilled tasks.

In an online setting, much of the interaction is one-on-one between the teacher and student, which lends itself to methods of interactive formative assessment. In light of my own interest, this article leads me to several questions: 1) What types of activities can be included in an asynchronous environment for the more formal, planned formative assessment? 2) Are these methods as effective in an online setting as in a classroom setting? 3) What types of formative assessment activities can be fostered among the students themselves, if any? The identification of specific formative assessment activities that impact student retention is a vital piece of my research mission. I am realizing that there are many interacting factors and that the research question(s) and design are paramount. One possibility for a future experiment would be to have student performance as the dependent variable and class environment (online versus face-to-face) as the independent variable, with formative assessment as the treatment for both groups. In this case, I would be looking to see if there were statistically significant differences in performance due to class environment.

Ginsburg, H. (2009). The challenge of formative assessment in mathematics education: Children’s minds, teachers’ minds. Human Development (0018716X), 52(2), 109-128.
In this paper, Ginsburg looks closely at using formative assessment in teaching mathematics to children. He first defines three basic methods of assessment: observation, task, and clinical interview. Of these, Ginsburg argues that the clinical interview is superior when teaching mathematics, because the interview helps the teacher understand the thought process of the student. For example, carefully noting pauses in children's responses can help identify whether a student has simply memorized an answer or is using an invented strategy. Another crucial point is that interviews can help teachers identify "bugs" in students' understanding of mathematics. For Ginsburg, formative assessment can be thought of as a special case of teaching in which a detailed understanding of children's thinking processes is used to inform instruction.

Because this paper deals specifically with formative assessment in mathematics, it was of great interest to me. The idea that formative assessment works well in identifying the "bugs" in our thought processes is compelling. Although this paper focuses exclusively on children, I have every reason to believe that the same holds true for adults. Another reason this paper is useful to my research is that it outlines obstacles to using formative assessment in mathematics. Important considerations include whether the teacher is qualified to work through the mathematical thought processes of students and whether the textbook is designed to have students think about mathematics in a non-superficial way. Thus, in designing my experiment, I need to use materials that require deep thinking about mathematical concepts and ensure that the instructors are capable of working through the thought processes needed. Although this paper did not report any empirical measurements, it provided many useful insights into how best to design formative assessment in relation to mathematics.

Hodgen, J., & Marshall, B. (2005). Assessment for learning in English and mathematics: A comparison. Curriculum Journal, 16(2), 153-176.
Here the authors examine subject differences and how those differences impact formative assessment. Much prior research has focused on generic strategies that can be applied to all content areas. English and mathematics provide students with literacy and numeracy and thus are typically the focus of summative assessment in schools. The goal of this study is to compare how formative assessment is realized in the classroom across these subject-specific domains. The authors identify three parts of formative assessment: scaffolding, the regulation of learning, and guild knowledge. Yet they state that formative assessment cannot be easily proceduralized; that is to say, subject-specific qualities of formative assessment are needed.

One of the strengths of this article is that it recognizes subject differences in formative assessment techniques, which addresses an important issue and opens the door for further research. Techniques and strategies of formative assessment that work for one discipline are not certain to work to the same degree in another. Identifying formative assessment methods that are effective in a mathematics curriculum is a strong area of interest for me. I would like to focus on the three parts of formative assessment identified by the authors and how they can be incorporated into an asynchronous environment in the mathematics discipline. Thus, one of my goals is to determine formative assessment techniques specific to mathematics that have a significant effect on student retention. As researchers, I think we should strive not only to determine what methods are successful, but also how to implement them in a practical way. The idea presented in this study, that formative assessment should be subject-based, is a valid consideration, and I will specifically be considering its effects in mathematics.

Wang, K., Wang, T., Wang, W., & Huang, S. (2006). Learning styles and formative assessment strategy: Enhancing student achievement in web-based learning. Journal of Computer Assisted Learning, 22(3), 207-217.
This study considered the effects of formative assessment and learning styles on student achievement in Web-based learning. Subjects were given Kolb's Learning Style Inventory and then randomly placed into one of three treatment groups. The research focused on three questions: 1) Do learning styles and formative assessment affect achievement? 2) What type of formative assessment facilitates learning in a Web environment? 3) What type of learning style best suits learning in a Web environment? A one-way ANCOVA revealed that both learning style and formative assessment were significant factors in student achievement, with no significant interaction effects. Furthermore, post hoc analysis showed that the group with the most formative assessment strategies performed significantly better than the other two groups. Post hoc analysis of learning styles revealed that the mean score of the Assimilator group was significantly higher than those of the Accommodator and Converger groups, but not the Diverger group. Overall, the study supports using diverse Web-based formative assessment techniques and considering learning styles to enhance student achievement.

This research has a particular focus on learning styles. In the beginning stages of planning my research, I am considering all undergraduate mathematics students. The article raises the question of whether groups should be drawn from populations of specific learning styles. Is this a necessary component of the effectiveness of formative assessment? I do not believe it is, but this article has broadened my thinking about the wide range of confounding variables involved. Furthermore, I believe that formative assessment is an essential component of successful pedagogy and one that can be provided through technological resources. This was a quasi-experimental design that determined "both formative assessment strategy and learning styles should be taken into account in the design of Web-based learning environments," and I immediately ask, "What are the practical processes, and what level of formative assessment is enough to be effective?" Since this study showed a trend of increased performance with greater use of formative assessment techniques, I see a future study that takes this further and considers effects on performance within a solely formative assessment environment.

Yue, Y., Shavelson, R., Ayala, C., Ruiz-Primo, M., Brandon, P., Furtak, E., et al. (2008). On the impact of formative assessment on student motivation, achievement, and conceptual change. Applied Measurement in Education, 21(4), 335-359.
This study hypothesized that formative assessment has a "beneficial impact on student's science achievement and conceptual change, either directly or indirectly by enhancing motivation." However, the data did not support this; possible explanations were the teachers' classroom management and the level at which they incorporated informal formative assessment techniques. The researchers note that, given the wealth of studies involving formative assessment, few have been conducted in regular educational settings as this one was. Formative assessment was theorized to increase motivation and promote conceptual change. In this study, neither the control nor the experimental teachers were notified about the true nature of the experiment. Both a motivation questionnaire and achievement assessments were developed to quantify the impact of embedded formative assessment, and both displayed acceptable internal consistency. At pre-test, the experimental group scored higher on most positive motivation measures but lower on achievement. Post-test results showed similar outcomes on positive motivation, and the control group significantly outperformed the experimental group on achievement. Thus, the treatment seemed not to have a statistically significant impact. "…this does not disconfirm the effectiveness of formative assessment. Rather, it provided evidence for the difficulty and importance of effectively implementing formative assessment." The researchers did find that the gap in achievement was not as large in the experimental group as in the control group, which is consistent with prior research on formative assessment for low achievers.

The outcomes of this study are of particular interest because they did not show significant results for the effectiveness of formative assessment. The study recognizes the difficulty of isolating the causal relationship between formative assessment and achievement. Extremely careful thought must go into the design of my research methods. I believe formative assessment is effective, and there is a plethora of research to support that belief; however, an empirical study is needed to solidify this reasoning. The results of this study have me questioning how I could structure my design in a regular setting and capture data to support my research hypothesis, and how I could incorporate random sampling for the control and experimental groups while effectively controlling the confounding variables at play. One way to investigate the effectiveness of teachers using formative assessment techniques would be to have two groups: one receiving specific training on formative assessment and one receiving none.


Upon completing this summer term, I plan to go through the more than 30 research articles I have accumulated and narrow them down to those most relevant to my research interest. I want to re-read some of the articles I have gathered more deeply and find the connections between them as well as parsing out the differences. I am eager to delve into the articles that I have yet to read and enhance my awareness of what has already been uncovered about my subject. I know that I will need a comprehensive understanding of formative assessment and the more information I can sort through the better.

Of particular interest to me is the work of Mantz Yorke and the connection he has studied between formative assessment and retention. One of his articles discusses formative assessment in the context of higher education and retention, mainly in the critically important first year of a program. This is highly relevant to my own research interest and can assist me as I move forward.

Through my work on the Article Critiques and the reading I have done, I have increased my knowledge and improved my understanding of empirical research. In regard to these studies, most of the statistical research deals with correlations, t-tests and analysis of covariance. Because of this I know that I need to learn much more about these design methods and measures needed to study the effects of formative assessment. I also know that for my research to be meaningful much thought and preparation is required in constructing my project. I need to learn more about the proper structure of experimental and control groups.

I envision a research project conducted online with two dependent variables: learning gain and interest. I would break my sample of college students into two groups, one group receiving summative assessment and another group receiving formative assessment. I would administer both a pre- and post-test to measure the improvement in achievement scores. I would also conduct a survey to measure student interest.
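The pre/post design just described could be analyzed by comparing gain scores (post-test minus pre-test) across the two groups. The sketch below is one plausible version of that analysis, using Welch's t statistic for unequal variances; all scores are invented, and Welch's t is only one reasonable choice of test (an ANCOVA on post-test scores with the pre-test as a covariate is a common alternative).

```python
# Hypothetical sketch of the envisioned analysis (all scores invented):
# compute per-student gain scores, then a Welch t statistic comparing
# the formative-assessment group with the summative-assessment group.
import math

def gains(pre, post):
    """Per-student improvement: post-test minus pre-test."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples, unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Invented pre/post scores for the two treatment groups.
formative_pre  = [55, 60, 48, 62, 58]
formative_post = [70, 68, 63, 75, 66]
summative_pre  = [54, 61, 50, 59, 57]
summative_post = [62, 65, 55, 66, 60]

t = welch_t(gains(formative_pre, formative_post),
            gains(summative_pre, summative_post))
print(round(t, 2))
```

In practice, the t statistic would be compared against a t distribution (for example via scipy.stats.ttest_ind with equal_var=False) to obtain a p-value, and the interest survey would be analyzed separately as the second dependent variable.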

Overall, this process has brought me to more questions to which I am ready to find answers. In depth analysis of formative assessment is my goal, but keeping in mind how this fits within the larger scheme of what methods educators can specifically apply is also important to me. After all, what good is it to know how something works, but not know how to use it?



Note: This area is specifically for research. All other assignments for the EPET Program can be found under Portfolio -> Classes.