Thursday, March 27, 2014

Conceptualizing the New Assessment Culture in Schools

Menucha Birenbaum (In press) uses a complex-systems framework to conceptualize assessment in schools in a chapter titled, "Conceptualizing assessment culture in school."

Acknowledging the recursive interactions among students, educators, schools, and systems, and the ways each of them learns, she asks readers to consider both classroom learning and teacher professional learning. Making thoughtful connections, Birenbaum (In press) proposes a way to conceptualize the new assessment culture that is emerging in schools. This well-written, thoughtful chapter is an insightful analysis of current research and thinking.

Part of the chapter considers the assessment culture mindset from the perspective of seven indicators:

1.    ‘It's all about learning’ (learning-centered paradigm)
2.    Assessment drives teaching and learning
3.    Assessment means dialogue/interaction with the learner
4.    Assessment empowers the learner
5.    Diversity is desirable
6.    I/We can do it!
7.    Modesty (in assessment) is required

Each indicator is illustrated by quotes from students, teachers, and principals in schools with an assessment culture. For example, in the section ‘Assessment drives teaching and learning,’ a middle school teacher is quoted:

“I know if the students understood according to their questions, according to how they work and learn, I can say that for me the students are like a mirror [of my instruction] … so I have to look at the mirror and examine their understanding.”

This is a thought-provoking chapter that will have teachers and leaders alike rethinking the assessment culture in their classrooms and schools.


To appear in: Wyatt-Smith, C., Klenowski, V. & Colbert, P. (Eds.) (In press). Assessment for Learning Improvement and Accountability: The Enabling Power of Assessment Series, Volume 1 (Ch. 18, pp. 285-302). Berlin: Springer.


Wednesday, March 26, 2014

Self-regulation, Co-regulation, and Learning to Dance

We’ve been hearing and reading more and more about self-regulation and its role in learning, in executive functioning, and so on. Self-regulation is an essential part of assessment research because it describes what learners do as they self-monitor their way to the next steps in their learning. Researchers with an assessment lens – including the international delegates attending the April 2014 symposium in Fredericton, NB, such as Linda Allal (2007), Heidi Andrade (2013), Menucha Birenbaum (1995), Ernesto Panadero (2013), and Dylan Wiliam (2007), to name but a few – have researched and written about assessment and self-regulation.

Co-regulation is another term I’m beginning to use – and those of you who read the French-language research literature may already be familiar with it. I’ve reflected on the notion of co-regulation as I’ve observed teachers and students engaged in ‘assessment in the service of learning.’ As I’ve done so, I’ve come to appreciate the way the term invites us to consider and reflect upon the intricate nature of teaching, learning, and assessment. Or, as Willis & Cowie (In press) put it, “…the generative dance.”

Linda Allal (2011) discusses the difference between self-regulation and co-regulation. She writes, “The expression ‘co-regulation of learning’ refers to the joint influence of student self-regulation and of regulation from other sources (teachers, peers, curriculum materials, assessment instruments, etc.) on student learning (Allal 2007). One could also define it as processes of learning and of teaching that produce learning. The focus is thus on learning as the outcome of education and teaching is subsumed within the ‘co’ of ‘co-regulation’” (p. 332).

Consider this – as teachers engage students in self-assessment, goal setting, and self-monitoring their own learning in relation to co-constructed criteria, and as students apply their growing understanding of quality and of process over time, we, as Allal (2011) points out, “…activate the processes of metacognitive regulation” (p. 332).

As teachers go further and engage students in collecting evidence of their own learning, in student-teacher conferences, and in parent-student conferences, students become more independent – they move from co-regulation to self-regulation (Allal, 2011). This isn’t a process that students do without support or as a result of some kind of scheduled ‘activity.’ Allal (2011) concludes by noting that, with the support of teachers and in the interactive classroom environment, a powerful relationship emerges – “a process of co-regulation that entails interdependency between self-regulation and socially mediated forms of regulation” (p. 333).

As I was reading a piece submitted by Dylan Wiliam (2007), I was struck by this description of a mathematics classroom:

 “These moments of contingency—points in the instructional sequence when the instruction can proceed in different directions according to the responses of the student— are at the heart of the regulation of learning. These moments arise continuously in whole-class teaching, where teachers constantly have to make sense of students’ responses, interpreting them in terms of learning needs and making appropriate responses. But they also arise when the teacher circulates around the classroom, looking at individual students’ work, observing the extent to which the students are on track. In most teaching of mathematics, the regulation of learning will be relatively tight, so that the teacher will attempt to “bring into line” all learners who are not heading towards the particular goal sought by the teacher—in these topics, the goal of learning is generally both highly specific and common to all the students in a class. In contrast, when the class is doing an investigation, the regulation will be much looser. Rather than a single goal, there is likely to be a broad horizon of appropriate goals (Marshall, 2004), all of which are acceptable, and the teacher will intervene to bring the learners into line only when the trajectory of the learner is radically different from that intended by the teacher.” (p. 1088-89).

Doesn’t this sound like co-regulation? Students were working independently of the teacher, yet the teacher was there – present – ready to bring emerging issues and questions back to the group to inform the learning of all and, in doing so, providing a demonstration of ‘self-regulation’ (one could also use the lenses of ‘scaffolding’ or ‘social construction of knowledge’). It reminds me of a chapter written by Sandra Herbst (2013) that includes the transcript of a Grade 12 applied mathematics class taught by Rob Hadeth. Rob very clearly knows ‘the dance’ and how to help students become self-regulated, successful learners. It is interesting to me how research follows practice – I suppose it must, given that students and teachers together are continually forging new ground. Researchers are the ones who come along to help educators understand the magic being created, or that could be created.

In this post I’ve just touched on a few of the articles related to self-regulation and co-regulation submitted by the International delegates. I encourage you to pursue this topic further and the references below are a great starting point.


Allal, L. (2011). Pedagogy, didactics and the co-regulation of learning: A perspective from the French-language world of educational research. Research Papers in Education, 26(3), 329-336.

Andrade, H. (2013). Classroom assessment in the context of learning theory and research. In J. H. McMillan (Ed.), SAGE Handbook of Research on Classroom Assessment (pp. 17-34). New York: SAGE.

Birenbaum, M. (1995). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. J. R. Dochy (Eds.), Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge (pp. 3-30). Boston, MA: Kluwer.

Herbst, S. (2013). Assess to success in mathematics. In A. Davies, S. Herbst & K. Busick (Eds.), Quality Assessment in High School: Accounts from Teachers. Courtenay, BC: Connections Publishing and Bloomington, IN: Solution Tree Press.

Panadero, E. & Alonso-Tapia, J. (2013). Self-assessment: Theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electronic Journal of Research in Educational Psychology, 11(2), 551-576.

Wiliam, D. (2007). Keeping learning on track: formative assessment and the regulation of learning. In F. K. Lester Jr. (Ed.), Second Handbook of Mathematics Teaching and Learning. Greenwich, CT: Information Age Publishing.

Willis, J. & Cowie, B. (In Press). Assessment as a generative dance: Connecting teaching, learning and curriculum. In C. Wyatt-Smith, V. Klenowski & P. Colbert (Eds.), Designing Assessment for Quality Learning: The Enabling Power of Assessment Series, Volume 1. Netherlands: Springer.

Thursday, March 20, 2014

Teaching Learners to See

Recently I wrote a blog post about quality and moderation. I used the example of a kindergarten teacher working with 5- and 6-year-old students by looking at samples of writing. Through this process the teacher is deliberately teaching about quality, deliberately teaching students the language of assessment, and deliberately teaching students how to self-monitor their way to success. In our work with teachers at all levels (K-12) we have engaged in similar processes in mathematics, science, social studies, English, the arts, and so on. To put it simply, it is a way to teach students how to give themselves incredibly powerful, specific feedback to guide their own next steps. And, because we do this work with all students, they can give powerful peer feedback about quality that supports learning (not marks, grades, or scores that get in the way of learning).

After reading my blog, Royce Sadler shared another article that he wrote that describes a similar process with post-secondary students. In this powerful article (2013) Royce Sadler writes,
“Feedback is often regarded as the most critical element in enabling learning from an assessment event. In practice, it often seems to have no or minimal effect. Whenever creating good feedback is resource intensive, this produces a low return on investment. What can be done? Merely improving the quality of feedback and how it is communicated to learners may not be enough. The proposition argued in Sadler (2010) is that the major problem with feedback is that it has been to date, and is virtually by definition, largely about telling.
Research into human learning shows there is only so much a person typically learns purely from being told. Most parents know that already. Put bluntly, too much contemporary assessment practice is focused on communicating better with students.
Teaching by telling is commonly dubbed the transmission model of teaching. It portrays teachers as repositories of knowledge, the act of teaching being to dispense, communicate or 'impart' knowledge for students to learn. Consistent with that was an early conception of feedback as 'knowledge of results' – simply telling students whether their responses to test items were correct or not. Telling is limited in what it can accomplish unless certain key conditions (touched upon later) are satisfied. By itself, it is inadequate for complex learning. Being able to use, apply, and adapt knowledge, or to use it to create new knowledge, requires more than merely absorbing information and reproducing it on demand.” (2013, p. 55)

Royce Sadler goes on to describe the process he used with a group of students and concludes with this statement,
“Much more than we give credit for, students can recognize, or learn to recognize, both big picture quality and individual features that contribute to or detract from it. They can decompose judgements and provide (generally) sound reasons for them. That is the foundation platform for learning from an assessment event, not the assumption that students learn best from being told. They need to learn to discover what quality looks and feels like situationally. They need to understand what constitutes quality generally, and specifically for particular works. Equally, students need to be able to detect aspects that affect overall quality, whether large or small, and understand how and why they interact. Students need a vocabulary for expressing and communicating both what they find and how they judge, at the least for that part of their evaluative knowledge they can express in words. Only after students have acquired a sufficient basis of appropriate tacit knowledge can they understand the content and implications of a marker's feedback. At that point, feedback can be effective as learners become more discerning, more intuitive, more analytical, and generally more able to create, independently, productions of high quality on demand.” (2013, p. 62)
I strongly recommend you find this very readable paper to enjoy. Royce Sadler and I sat beside each other at the International Symposium in Chester in 2001. Although he is not a member of the Australian team for the 2014 International Symposium in Fredericton, NB, you can get to meet him through his writing. His work continues to inform the research and 'in classroom' practical work of all of us interested in classroom assessment.

Sadler, D. R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless & M. Taras (Eds.), Reconceptualising Feedback in Higher Education: Developing Dialogue with Students (Ch. 5, pp. 54-63). London: Routledge.
Abstract. Higher education teachers are often frustrated by the modest impact feedback has in improving learning. The status of feedback deserves to be challenged on the grounds that it is essentially about telling. For students to become self-sustaining producers of high quality intellectual and professional goods, they must be equipped to take control of their own learning and performance. How can students become better at monitoring the emerging quality of their work during actual production? Opening up the assessment agenda and liberating the making of judgments from the strictures of preset criteria provide better prospects for developing mature independence in learning.

Wednesday, March 19, 2014

Quality, Moderation, Professional Judgment, and Self-Regulation

When teachers use work samples with students to help them understand quality, they are inviting them to engage in a kind of social moderation process – a guided experience of analyzing student work. During this process, teachers guide students to understand the key attributes of quality in the work samples. [See Chapter 4 in Making Classroom Assessment Work (Davies, 2011) for a detailed description for teachers or Chapter 4 in Leading the Way to Assessment for Learning: A Practical Guide (Davies, Herbst & Parrott Reynolds, 2012) for leaders].

Over time this process helps students develop the language of assessment so they can self-monitor, self-assess, and engage in peer assessment. As students continue to engage in social moderation as part of the classroom assessment and instructional cycle, they learn to articulate what they’ve learned and share proof of learning with others – for example, through parent-student-teacher conferences.

This kind of classroom assessment and instruction is incredibly powerful. It deliberately teaches students how to self-monitor – to self-regulate – which, over time, leads to the development of powerful executive functioning skills. It is also a way to deliberately teach 21st-century skills of analysis, synthesis, and critical thinking, to name a few.

Involving students in a social moderation process is more and more common as teachers come to understand the role of moderation in their own learning. For example, a teacher of 5- and 6-year-old children has a series of samples of writing and drawing from early development up to examples that are beyond where the most able writer in the class is currently working.

In small groups, students gather to compare their day’s writing to the samples and talk about what they are currently doing that is similar to a sample and what is different. During this conversation of embedded instruction, ideas for ‘next steps’ in subsequent work become clear.

Historically, moderation was a process used in large-scale assessment. There are different ways of going about the moderation process but, in general, it involves looking at student work with others, co-constructing criteria, developing a scoring guide, and selecting samples to illustrate quality. Participants then score student work and check with others to ensure similar findings – a process of checking for inter-rater reliability.

Being involved in a process of social moderation has been shown to result in adults “learning to produce valid and reliable judgments that are consistent with one another and with stated achievement standards” (Adie, Klenowski & Wyatt-Smith, 2012). It is also part of what led the Assessment Reform Group, in a publication titled The Role of Teachers in the Assessment of Learning (2006), to conclude that teachers' professional judgement is more reliable and valid than external tests when teachers are engaged in looking at student work, co-constructing criteria, creating a scoring guide, scoring the work, and checking for inter-rater reliability. Teachers, even with students as young as 5 and 6 years of age, are experiencing the same kinds of results with a classroom version of social moderation (Davies, 2012).

As part of my pre-reading for the International Symposium in April 2014, I’ve been enjoying articles by Linda Allal (2013) (written about in an earlier post) and by Lenore Adie (2013) and her colleagues, Val Klenowski and Claire Wyatt-Smith (2012). I’ve also revisited work by Graham Maxwell, who will be in Fredericton, NB, in April 2014, as well as work by a former International Symposium (2001) member, Royce Sadler, including a more recent article also focused on moderation.

These researchers are focused on what happens in the process of moderation and when systems deliberately engage teachers in learning about quality and gaining ‘informed professional judgment’ through the process of moderation. If this area is of interest to you, I encourage you to read their research and writing. Here is a selection of readings to get you started. And, of course, if you check the reference lists, you will find the “shoulders upon which they stand.” 

Recommended Reading: 

Adie, Lenore E., Klenowski, Valentina & Wyatt-Smith, Claire. (2012). Towards an understanding of teacher judgement in the context of social moderation. Educational Review, 64(2).

Adie, L., Lloyd, M., & Beutel, D. (2013). Identifying discourses of moderation in higher education. Assessment and Evaluation in Higher Education. Volume 38, Issue 8, p. 968.

Davies, Anne, Herbst, Sandra & Parrott Reynolds, Beth. (2012). Transforming Schools and Systems Using Assessment: A Practical Guide. Courtenay, BC: Connections Publishing.

Hayward, Louise & Hutchison, Carolyn. (2013). 'Exactly what do you mean by consistency?' Exploring concepts of consistency and standards in Curriculum for Excellence in Scotland. Assessment in Education: Principles, Policy & Practice, 20(1), 53-68.

Maxwell, Graham. (2010). Moderation of student work by teachers. In Penelope Peterson, Eva Baker, and Barry McGaw (Eds.), International Encyclopedia of Education, Volume 3 (pp. 457-463). Oxford: Elsevier.

Sadler, D. R. (2013). Assuring academic achievement standards: From moderation to calibration. Assessment in Education: Principles, Policy and Practice, 20, 5‑19.
And the entire issue of: 
Assessment in Education: Principles, Policy & Practice, 20(1), 2013. Special Issue: Moderation Practice and Teacher Judgement, which includes articles by Val Klenowski, D. Royce Sadler, Linda Allal, Claire Wyatt-Smith & Val Klenowski, Lenore Adie, Susan M. Brookhart, and others.

Tuesday, March 18, 2014

Classroom Observations: Who is Really in the ‘Driver’s Seat’?

It has always seemed to me that the days between “back to school” in January and Spring Break provide the “thickest” learning time of the school year.  I have often heard teachers say that they “push through a lot of the curriculum” in these three months.  It is no different for educators.  While we are celebrating the “100th  day” or beginning the new semester or writing another set of report cards or administering the provincial examinations, we are also attending several teacher conferences, workshops, conventions, or exploring many other professional learning opportunities.

Since the beginning of 2014, I have been in over a dozen classrooms doing demonstration lessons in BC, Saskatchewan, Manitoba, and Hawai’i.  They have included lessons in grade 11 algebra, grade 6 social responsibility, grade 10 social studies, grade 11 Canadian history, grade 3/4 science, and grade 7 mathematics.  Though I have worked in education for over twenty years, I still find it difficult to make my teaching public in front of students and teachers whom I have only just met.  It takes time to both prepare the lesson and harness the emotion in order to step out in front of adult and student learners with whom I am beginning to build rapport…a far cry from the trust that is built in schools with each passing day. 

However, once the instruction begins and the learning becomes visible, I often find myself in that place that Mihaly Csikszentmihalyi calls flow - a state of concentration or complete absorption with the activity at hand and the situation.  Additionally, the teachers who are observing are not in the role of evaluator.  Rather, they are tasked to collect evidence in areas that I have identified in our pre-observation conversation. That is, as the person being observed, I provided specific direction in regard to what the observers are asked to gather.  There are no expectations for analysis, judgment, or the assignment of value to my words and actions. 

In the past several months, I have asked the observers (usually between twelve and twenty in number) to pay attention to these three areas:

Instructional cycle – What stages of an instructional sequence can you document?
Teacher engagement – What do you notice that provides evidence of teacher engagement with the students and with the instruction?
Student engagement – What do you notice about the students and what they are doing and saying?

Please note that this stance is critical as we welcome one another into our classrooms to watch us at work with our learners.  This stance also sends an incredibly strong message - this process of observation is about the teacher, for the teacher, and directed by the teacher.

As we leave the classroom, I let the students know what we, as adults, are now going to do.  I tell them that the teachers are not going to tell me that the lesson was “good” or that it was “OK” or that it could have been better.  At best, I might feel great because of the first response, neutral because of the second, or a bit upset by the third.  Once the moment of emotion passes, I will be left with only questions.  Why did they think it was “good?”  What could I have done differently to make it better?  Often the students nod, seemingly in solidarity with the idea that I deserve more. 

We retreat in order to engage with the evidence that has been collected.  Teacher observers talk about what they noticed; in essence, they lay the data in front of me and I interpret that information through the lens of the decisions that I made during the instruction.  I make my thinking visible based on the evidence that they collected.  So, for example, if someone indicates that he noticed that I did not share an entire sample with students, but rather “chunked” the sample into smaller parts, I might talk about why I did that and what I expected the students to do after each chunk.  Or another teacher might talk about the sentence stem (“I noticed that the work…”) that I had the students use in order to offer feedback against the criteria that we had just constructed.  In that instance, I might talk about the stem as a way to build the vocabulary of assessment rather than the language of evaluation.  Below is a picture of what I wrote on a whiteboard during a recent debriefing in order to more clearly explain what I was thinking about during the instruction.  This becomes an artifact not only of the debriefing phase, but of my learning as well.

During the debriefing, I am again “in the driver’s seat.”  The role of the observer is not to offer a hypothesis, nor to put forward an opinion; this is a learning opportunity for me to look into my teaching through the evidence that is being presented.

As I talk with many, many teachers, the pervading sentiment is that it can be extremely uncomfortable for others, or even a single other, to come into their classrooms and observe the instruction and the learning.  This feeling is prevalent even in schools where high levels of trust are reported.  Individual teachers have spoken of the fear of being evaluated and judged. 

Though I am not suggesting that it is simple to shift that mindset, what this account reminds me of is that too often we observe with our own agenda at the forefront.  The observation serves as a vehicle to respond to our own questions.  Rather, the invitation to come into classrooms, when coupled with the stance that we are there to serve the one who is being observed, might begin to allow colleagues to view this process differently.  As the observer, I am here to attend to your needs and to help you to respond to the things that you are wondering about.  This can shift our perspective on what it means to have others observe our teaching and can provide a framework so that we can learn and reflect more.

Monday, March 17, 2014

But... Thou Shall NOT… Use Formative Assessments...

... as Part of the Summative Grade

A few weeks ago I was working with a group of secondary teachers on ideas connected to our latest book, A Fresh Look at Grading and Reporting in High Schools. At one point there was a murmur in the group. When I inquired about the murmur some of the teachers said, “But, we were told, 'Thou shall NOT use formative assessments as part of the summative grade.’”

The notion of evidence of learning – and what makes good evidence – is key when it comes to summative assessment. After all, everything a student does, says, or creates is potentially evidence of learning. What counts? It is all about purpose. Are you considering the evidence of learning in a formative way – to inform instruction? Or, are you considering the evidence of learning to determine how well and how much a student has learned? It is about purpose.

The evidence – observations, conversations, and products – that is used to determine the summative grade depends on the teacher. It is a professional decision. In A Fresh Look at Grading and Reporting in High Schools, Sandra Herbst and I write about the entire grading process but in this post I want to focus on one aspect of that decision-making process.

Should evidence of learning be excluded simply because it has been used to inform instruction during the learning? That is, should formative assessment information only be used for formative purposes? And, should summative assessment information be used only for summative purposes? Or, as the secondary teachers put it, “We understood that we were not to use formative assessments as part of the summative grade.”

Information is information. And, what we do with information depends on purpose. The Assessment Reform Group (one member, Gordon Stobart, is one of our International delegates), in a 2009 document titled “Fit for Purpose,” put it this way: “It should be noted that assessments can often be used for both formative and summative purposes. ‘Formative’ and ‘summative’ are not labels for different types or forms of assessment but describe how assessments are used.” (2009, p. 9)

One of the pre-reading documents, published by the New Zealand Ministry of Education and submitted by the New Zealand team members for the International Symposium, puts it this way: “Sometimes assessment is referred to as being either ‘formative’ or ‘summative.’ The formative use of assessment information is an important part of everyday practice. It is a diagnostic process concerned with identifying achievement and progress and strengths and weaknesses in order to decide what action is needed to improve learning on a day-by-day basis. The summative use of assessment is concerned with ‘summing up’ achievement at a specific point of time. However, these summations can be used not only to ascertain the level of achievement at a specific point of time but also to look back and consider what progress has been made over a period of time compared with expected progress.” (Ministry of Education, 2011, p. 14)

In summary, one needs to consider all the evidence of learning – the student’s entire learning journey – in order to better understand what and how much has been learned. When teachers make a summative assessment of a student’s learning, we engage in making an informed professional judgment. And, to do so, we use all the information available to determine whether students have learned what they needed to learn, can do what they need to be able to do, and can articulate what they need to articulate in relation to the standards or outcomes for the course or subject area.

Statements such as, “Thou shall not use formative assessment as part of the summative grade” are not helpful when ‘informed professional judgment’ is at work.


Mansell, W., James, M. & the Assessment Reform Group (2009). Assessment in schools. Fit for purpose? A Commentary by the Teaching and Learning Research Programme. London: Economic and Social Research Council, Teaching and Learning Research Programme.

Ministry of Education (2011). Ministry of Education Position Paper: Assessment [Schooling Sector]. KO TE WHĀRANGI TAKOTORANGA ĀRUNGA, Ā TE TĀHUHU O TE MĀTAURANGA, TE MATEKITENGA. Retrieved March 17, 2014.