Posts Tagged assessment
Examination and Assessment
While our definitions of what the subject ‘English’ is have shifted over the years, it is worthwhile considering whether attitudes to examination and assessment have shifted as much, especially given the reported impact of standardised exam-based assessment on the realised delivery of the intended curriculum and the construction of student identity (cf. Gale & Densmore, 2000; Kohn, 1999). The assessment and reporting of learning is one major way in which the school system retains power over the knowledge that students are deemed to have acquired (Foucault, 1977), in particular when ‘technicist’ forms of assessment such as traditional written exams are employed, as these tend to “concentrate upon a narrow view of student achievement” (Marsh, 1997, p.56). In this final area of commonly contested territory I provide an overview of these broad ideas about the role of assessment and examination in the school system, as well as more specific thinking about the NSW curriculum landscape and about assessment in HSC English.
In a research project examining the link between examinations and inequality in Australia in particular, Teese (2000) explores the ways in which choices about syllabuses and their examination result in increased social power for a privileged group that is more likely to gain academic success. The project documented the way in which students with the “fewest family advantages entered schools with the fewest facilities and encountered the least experienced staff” (p.31), resulting in a low level of academic security for such students. Teese also argues for the existence of a ‘curriculum hierarchy’, in which it is not just “any subjects that occupy the top levels of the curriculum, but those that give the greatest play to the economic power, cultural outlook and life-styles of the most educated populations” (p.197).
In the specific case of English, and of particular interest for research examining the NSW HSC English syllabus and its inclusion of a broader range of texts for study, Teese argues that the removal of canonical texts from the curriculum does not “free students from the cultural world in which Shakespeare was venerated” (p.45). Examination requirements themselves can also be seen as discriminating between “sophisticated” and “pedestrian” styles of written response (a phenomenon that is also explored in the work of Rosser, 2002), preferring responses that demonstrate not just a mastery of skills and content knowledge, but also showcase creativity and moral sensibility. Green makes a similar point in his discussion of the influence of postmodernism on advancing English teaching for critical consciousness and change, explaining that “the emergence of a more radically and socially-critical version of English teaching along these lines is still linked to particular, and arguably limited, understanding of culture and society” (Green, 1995, p.405).
Resources such as the OECD scenarios for future schooling discussed at the outset of this chapter provide one avenue for holistically pursuing curriculum change that is firmly embedded in a larger plan for system-wide change. Each of the six scenarios created by the OECD includes a description of four integral facets of schooling: ‘learning and organisation’; ‘management and governance’; ‘resources and infrastructure’; and ‘teachers’. Decisions relating to assessment in schooling fall under the area of learning and organisation, and systems where “curriculum and qualifications are central ideas of policy, and student assessments are key elements of accountability” (OECD, 2001, p.1) are described as part of the bureaucratic school system that forms the ‘status quo’ (scenario 1a). In this scenario the bureaucracy encourages uniformity and is resistant to radical change – this is consistent with the findings of Green and Teese, who identify curriculum hierarchies surrounding both content and assessment as barriers to realising change in the English curriculum.
While technicist forms of assessment such as traditional written examinations and mass standardised assessment are currently embedded in the educational landscape, diversity in student achievement is recognised through other discourses in assessment policy, for example in the distinction between summative and formative assessment. NSW curriculum and policy documents refer to these as ‘assessment of learning’ and ‘assessment for learning’ respectively, and these terms are defined by the Curriculum Corporation:
Assessment of learning is assessment for accountability purposes, to determine a student’s level of performance on a specific task or at the conclusion of a unit of teaching and learning. The information gained from this kind of assessment is often used in reporting.
Assessment for learning, on the other hand, acknowledges that assessment should occur as a regular part of teaching and learning and that the information gained from assessment activities can be used to shape the teaching and learning process.
(Curriculum Corporation, website accessed May 18, 2006)
This distinction, however, while shifting the focus of certain forms of assessment to acts of learning rather than accountability, does not address concerns about curriculum hierarchy, or about narrow (academic) visions of the aims of schooling.
Another important contribution to the field of assessment discourse is the notion of authentic learning, or authentic assessment. In exploring the implications this approach has for curriculum, Marsh explains that “authentic assessment encompasses far more than what students learn as measured by standardised tests or even by ordinary teacher-made tests. Authenticity arises from assessing what is most important, not from assessing what is most convenient” (1997, p.56). Students who are learning in an environment of authenticity will undertake tasks that are more context-bound and more practical than formal exams, and which focus on challenging students by requiring analysis, integration of knowledge and invention (Darling-Hammond, Ancess, & Falk, 1995). Authentic assessment practices most closely align with the learning and organisation features of the OECD’s ‘Re-schooling’ scenarios, where more explicit attention is given to non-cognitive outcomes and there is a strong emphasis on non-formal learning (scenario 2a), and where quality norms replace regulatory approaches (scenario 2b). Authentic assessment also features in the first ‘De-schooling’ scenario (3a), where learning networks are focused on local community needs; however, social inequalities are predicted in the second of these scenarios (3b), where the market determines a new educational hierarchy.
In NSW the Quality Teaching Framework is provided as a model for planning and reflecting on curriculum content choices and pedagogy. The framework, which was largely derived from the ‘Productive Pedagogies’ developed and implemented in Queensland as a result of longitudinal research on school reform, formally underpins teaching practice in NSW public schools. It guides teachers in incorporating a range of pedagogical elements into their ‘Quality Teaching’ practice by focussing on the intellectual quality of a lesson, the development of a quality learning environment, and the significance of the material learned to the lives of students. While the Quality Teaching Framework is presented as a guide to pedagogy, the implications for assessment are that, although technicist forms of assessment are not precluded, pedagogic elements such as providing ‘problematic knowledge’, ‘engagement’, ‘student direction’, ‘cultural knowledge’, ‘inclusivity’ and ‘connectedness’ are more closely aligned with authentic assessment practices that flow from authentic, context-bound learning.
Such aims to provide a quality learning environment in NSW stand in stark contrast to accounts of high-stakes testing in international contexts. In an account of assessment in the context of the 1970s, Dixon explains that in the U.K. especially “the tradition…is for preparation for the specialised uses of language demanded by the examination to be fed back into the normal course…the examination itself begins to look quite normal, and English becomes a weird kind of game”, and he also quotes an observation made by Walter Loban at the 1966 Dartmouth Conference: “the curriculum in the secondary school inevitably shrinks to the boundaries of evaluation; if your evaluation is narrow and mechanical, this is what the curriculum will be” (Dixon, 1975, p.93).
In more recent research on English teachers’ rhetoric and practice, Bousted (2000) confirms that English teachers in the U.K. continue to view timed examinations as “[limiting] the opportunities for pupils to formulate a personal response to a literary text” (p.13). Teachers interviewed and observed for the study also argue that exam-based assessment had led to the adoption of poor pedagogical practices, such as rote learning and the concentration on a narrow range of curriculum content (p.14). Research by Darling-Hammond in the U.S. found that even when authentic assessment practices such as performance-based rather than standardised testing were employed, the continued use of assessment results to ‘sort students and sanction schools’ rather than to ‘support student-centred teaching’ resulted in the perpetuation of social inequity (Darling-Hammond, 1994, p.25).
Whether authentic learning and assessment, and a balance of assessment for and of learning, are realised in the NSW HSC English classroom to support student-centred teaching is one aspect of the curriculum explored later in this dissertation through analysis of the collected data. Recent research on Year 12 students in NSW by Ayres, Dinham and Sawyer (1999) suggests that high-stakes examinations do not inhibit best-practice teaching, as generating understanding of the subject remains teachers’ paramount concern. This research, however, only involved the observation and interview of teachers of high-achieving Year 12 students (those scoring in the top 1% of the state in particular subjects). Therefore, while it may be concluded that effective teaching takes place in NSW despite the high-stakes assessment environment, it is essential to consider the effects of this environment on students who do not achieve as highly.
In relation to English specifically it is significant that an account of English examinations such as Dixon’s from over 30 years ago would still come close to accurately describing the current HSC English exam, in which students complete six questions over two written exams lasting two hours each:
The range of English activities covered by present methods of examining in the U.K. and the U.S. is extremely narrow: talk and listening is often simply excluded, and drama almost always omitted…literature is examined but the texts are not available, unseen poems may not be read aloud, an eighteen-year-old in the U.S. is given 20 minutes for a composition and in the U.K. three major essays are demanded in three hours. (Dixon, 1975, pp.92-93)
Concerns about assessment and examination therefore must be considered both in relation to their impact on pedagogy, and in terms of the adequacy of the actual examination methods utilised in realising the stated purposes of the English curriculum in the senior years of high school.
To conclude this section I return to Teese’s observations of the ways in which perceptions about the ideal student are shaped by the demands of the formal examinations they are required to take. Teese (2000) argues that formal exams in Australia have required students to ‘project an image…of the young scholar-intellectual’ (p.4) as “examiners have unfailingly demanded [academic] qualities [e.g. abstraction and concentration, sensitivity to form and structure, logical and retentive abilities, and maturity of perspective and argument], whatever the circumstances under which real students have learnt” (p.194). His findings also show a relationship between the image of the ideal student informing the nature of school examinations and attributes of higher socio-economic status, as “…elements of the scholarly disposition…are linked closely to an educated life-style and arise from the continuous and informal training given by families rather than explicit and methodical instruction in school” (p. 5). By interrogating ideals that are constructed in both public and professional discourses, the research in this thesis will reflect on the functions of schooling and possible futures that are implied in the current HSC English curriculum.
Ayres, P., Dinham, S., & Sawyer, W. (1999). Successful teaching in the NSW Higher School Certificate: Summary of a research report for the NSW Department of Education and Training. Sydney: NSW DET.
Bousted, M. (2000). Rhetoric and practice in English teaching. English in Education, 34(1), 12-23.
Curriculum Corporation. (n.d.). Assessment for learning: What is assessment for learning? Retrieved May 18, 2006, from http://cms.curriculum.edu.au/assessment/whatis.asp
Darling-Hammond, L. (1994). Performance-based assessment and educational equity. Harvard Educational Review, 64(1), 5-30.
Darling-Hammond, L., Ancess, J., & Falk, B. (1995). Authentic assessment in action. New York: Teachers College Press.
Dixon, J. (1975). Growth through English: Set in the perspective of the seventies. London: Oxford University Press.
Foucault, M. (1977). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). London: Penguin Books.
Gale, T., & Densmore, K. (2000). Just schooling: Explorations in the cultural politics of teaching. Buckingham, Philadelphia: Open University Press.
Green, B. (1995). Post-curriculum possibilities: English teaching, cultural politics, and the postmodern turn. Journal of Curriculum Studies, 27(4), 391-409.
Kohn, A. (1999). The schools our children deserve: Moving beyond traditional classrooms and ‘tougher standards’. Boston: Houghton Mifflin Company.
Marsh, C. J. (1997). Key concepts for understanding curriculum (A fully rev. and extended ed.). London: Falmer Press.
NSW DET. (2003). Quality teaching in NSW public schools: A classroom practice guide. Ryde: NSW Department of Education and Training Professional Support and Curriculum Directorate.
OECD. (2001). The OECD schooling scenarios in brief. Retrieved from http://www.oecd.org/innovation/research/centreforeducationalresearchandinnovationceri-theoecdschoolingscenariosinbrief.htm
Rosser, G. (2002). Examining HSC English: Questions and answers. Change: Transformations in Education, 5(2), 91-109.
Teese, R. (2000). Academic success and social power: Examinations and inequality. Carlton South: Melbourne University Press.
New Stage 6 (senior secondary) syllabuses were released today in NSW, and the media circus was on point.
The worst offender for misinformation was probably the Daily Telegraph, with Bruce McDougall’s piece ‘NSW Education: School syllabus shake-up promotes the classics, Shakespeare and Austen back for the HSC’ riddled with unnamed sources and incorrect claims.
Among the claims are:
- That “Shakespeare is back” (he never left – he remains mandatory study in Advanced English)
- That “Jane Austen, Charles Dickens and Joseph Conrad will become mandatory for Year 11 and Year 12” (impossible to know until the text prescriptions are released later this year, and unlikely to be true for all courses)
- That the Area of Study is “criticised by students, parents and teachers” as being tied to “woolly concepts” (name your sources or go home).
Disappointingly, NESA president Tom Alegounarias seemed to add fuel to the fire with this misleading statement:
- “In English, for example, Shakespeare or the equivalent other aspects of great literature will be mandatory.” (Shakespeare is ONLY mandatory in Advanced English, and always has been, and ‘great literature’, i.e. texts from the Western literary canon, has always been studied in other courses)
Once again we heard this old chestnut:
- “Education chiefs said they had listened to sustained criticism from employers and businesses that many school leavers applying for jobs lacked basic skills in literacy and numeracy.” (does this reference to ‘sustained criticism’ mean complaints about this dating back to the early 1900s, which perennially persist despite amazing growth in youth literacy rates?)
It was a frustrating read.
Especially given that NESA had fed the media machine with statements before making the syllabuses available on their website for teachers to see first-hand. PDF versions of the material didn’t come online until lunchtime, leaving busy teachers with a sense of panic about navigating disparate web-only resources.
One can only hope that these spurious claims work to galvanize the profession in the coming months, as we create new resources and share fresh perspectives on the syllabus change. If conversations I had online with colleagues today are anything to go by, there is still hope for this. We are already interrogating more important aspects of the changes to consider implications, including:
- The inclusion of a ‘multimodal presentation’ assessment (will this be more than a speech-aka-essay-read-aloud, with a dose of death by PowerPoint to boot?)
- The categorisation of English Studies as an ATAR eligible course (what will the impact be on Standard enrolments?)
- The increased ability to forgo any study of digital or multimodal texts in Advanced English (congratulations NSW, you just got a ‘Literature’ syllabus in disguise!)
Stay tuned for more analysis in weeks to come.
(Author image created using Trove map resource, Bard portrait, and news quote.)
Something I’ve been meaning to blog about for a while is the unseen labour that goes into marking student work.
It’s semester 2 marking time in Australian universities, and I’ve just finished a stack of mine. ‘Stack’ in the figurative sense, because these were a combination of learning logs and video blogs, all submitted and marked digitally, so there were no actual stacks of anything.
Being the audience and assessor for these students’ work was a privilege, and I don’t think any teacher should forget that having the authority to do this work is always a privilege. Sometimes it is also a joy. And it is always something that we do, knowing the important positive impact that quality feedback has on student learning.
It is a labour of love, but it is a labour to be sure.
Generally a student assignment takes 30 minutes to mark. So they say. Once I get my hand in, I can usually get through an exam response in 20 minutes (they don’t tend to require any feedback), and an essay in 30 minutes, but a set of professional plans (e.g. annotated lesson plans, units of work) takes about 45 minutes and you just can’t rush it.
A typical formula for university marking in my field is that formal assessment feedback and grading for each student should get an hour of your time each semester. That’s 30 minutes for each assignment if you only set two assignments. If you want to set more assignments, it’s on you to mark them quicker. If you’re in the edu-biz, you’ll know that this is where group presentations and short response exam papers start looking attractive.
In a typical semester I have 120 students. That’s maybe 90 students in one big unit, and 30 students in a smaller unit. As a high school teacher this was also roughly the number of students I had – roughly five classes of 25 (some with 30 students, some with closer to 20, e.g. senior classes).
Using me as an example: I set two assignments each semester. And we know that on average I plan to spend 30 minutes on each.
My semester runs for 9 weeks (because it’s followed by prac.), but semesters can also run for 13 weeks. You can’t really set an assignment in weeks 1 or 2. If the assignment is big, worth 40% or more, you can’t really set it in weeks 3 or 4 either.
Let’s say assignment 1 is submitted in week 5. We’re expected to get work back to students in 2-3 weeks. So they say. Which puts me giving their results back to them in week 8 (a very important deadline if the next assignment is due in week 9).
120 students × 30 mins each
= 60 hours of extra work
÷ 3 weeks
= 20 hours of extra work each week.
Add about 3 hours for each of the following:
- getting your head around the task and long times spent on first few tasks marked
- moderation with a colleague
- administration of grades, uploading feedback to LMS etc.
Rinse and repeat just one week later if you have set assignment 2 to be due in week 9.
So if you’ve got about 120 students on average, and managed to keep yourself limited to your 30 minutes per assignment in all units in both semesters, then you will have worked about 23 hours of overtime a week for 12 weeks of the year.
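The arithmetic above can be sketched as a tiny calculator, using this post’s figures as defaults (the 9 hours of overhead is the roughly 3 hours each for getting your head around the task, moderation with a colleague, and grade administration). Treat it as a back-of-the-envelope sketch, not a workload formula:

```python
def marking_overtime(students=120, mins_per_assignment=30,
                     overhead_hours=9, turnaround_weeks=3):
    """Return (total_hours, hours_per_week) for one assignment round.

    Defaults reflect this post: 120 students, 30 minutes per
    assignment, ~9 hours of overhead, a 3-week turnaround.
    """
    marking_hours = students * mins_per_assignment / 60  # 120 * 30 / 60 = 60
    total = marking_hours + overhead_hours               # 60 + 9 = 69
    return total, total / turnaround_weeks               # 69 / 3 = 23

total, per_week = marking_overtime()
print(f"{total:.0f} hours over 3 weeks = {per_week:.0f} extra hours/week")
# → 69 hours over 3 weeks = 23 extra hours/week
```

Swap in your own class sizes and turnaround times to do the math for your context; two assignment rounds per semester across two semesters is what yields the 12 weeks of overtime mentioned above.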
I say overtime, because the whole time you’ve been doing this, life, and other work, goes on…
Classes still need teaching. Emails still need answering. You still have to front up to important meetings. Research papers still need writing, grant proposals still need submitting, you may be collecting research data and attending conferences too. If you’re a school teacher, it’s classes, emails, meetings, lesson prep, school dance supervision, year 8 camp, sport coaching, bus duty…the list goes on.
As depressing as this exercise is, I think we should all do the math on this for our own teaching context.
People need to know the reality of what teachers mean when someone asks them how they’re going and the only response they can muster is a stony-eyed “I’ve been marking”.
Spouses and family members need to be acknowledged for how they support the teachers in their lives during marking seasons.
Teachers need to grasp the reality of their workloads so they aren’t taken by surprise each time the overtime cycle hits, and help each other learn how to manage the physical, mental and emotional toll it takes (or collectively rise up and change this system maybe, hey?).
And beginning teachers need to be aware of what they’re in for.
I am so grateful to my boss for giving me a lighter teaching load this semester (just 35 students!) so I can focus on my research publications, but next semester I’ll have 120 students again. I can’t wait to meet them, but I sure do wish their assignments would mark themselves!
Chatting in the mid-year break with Bianca and some other PBL-peeps, this video was recommended to me. It’s only 15 minutes long, and now I’m recommending it to you too:
The video shows what can be done in a school where teachers and leaders are prepared to really let students design their own learning. Like, really let them do it.
The students in this alternative academic program design their own Independent Learning Projects (that they report on weekly to other students), as well as their own Individual Endeavours (ambitious term-long projects, e.g. learning to play the piano and putting on a recital).
Something that interested me was, about 1 minute in, one of the students explained that in the course they look at “the four main bodies of learning”:
- Social Sciences
- Natural Sciences
Make no mistake – I was totally inspired by this video and even showed it to my students this semester. So inspired that I changed our first assignment to be based on completing an Independent Learning Project! But when those four areas are offered up as the “main bodies of learning”, I can already see points of tension for making this kind of program work across the board. What of the other learning areas? What of health and physical education? What of the arts? Foreign languages?
Without engaging with conversations about what is ‘essential’, ‘core’, or ‘fundamental’ in education – and working out some kind of common goal or philosophy to anchor us – I suspect alternative programs like the one featured here will (continue to) struggle to gain traction.
Although these programs aren’t (yet) the silver bullet we need to shed our teacher-centred shackles, I believe bringing these approaches into our teaching is vital.
Personal take-away thoughts:
- Students have passions and interests that they are entitled to pursue.
- Students are capable of designing their own learning, if we give them some parameters.
- Students are more motivated to learn when they have some control in devising the questions for investigation.
- Independent learning approaches seem an immediate good fit for students like these (this is a class of nine Honours students, who self-selected into the program), but would disengaged or recalcitrant students need more scaffolding?
- Doing my own Independent Learning Project in high school was a transformative experience for me. It was called a ‘mini thesis’ by my teacher, and I chose to study the French Revolution. I did this for just one term in just one subject – surely this is achievable across the board without rethinking our whole approach to schooling?
One of my students followed up this investigation with the following juicy question:
Essential fluencies seem to structure skills within select criterion, however I am curious as to whether PBL uses these as guides (depending on the student’s PBL objective) or whether students are meant to meet all of these at different stages of their PBL (to achieve a final product)?
If this is a flexible criteria, would using a feedback grid be the most effective way of communicating the development of an idea (as it focusses less on curriculum goals, more on constructive advice)?
I decided to post my answer to part of this question here on the blog:
You’ve asked a good question about skills and standards. My understanding of PBL (and other inquiry-based models) is that assessing skills is just as important as assessing content knowledge.
There are two (opposing) axioms that relate to this:
- ‘What gets measured gets done’.
- ‘Not everything that matters can be measured; not everything that can be measured matters’.
At the moment I’m inclined to agree with the PBL movers and shakers – that developing ‘soft skills’ should be seen as a vital curriculum goal, just as important as the acquisition of discipline knowledge and technical skills. The argument here is that if we don’t find a way of measuring/assessing soft skills then teachers will continue to sideline them. Because ‘what gets measured gets done’.
The BIE crowd have developed a range of assessment rubrics for the four skills that they identify as most important to PBL specifically: creativity and innovation, presentation/communication, collaboration, and critical thinking. You can find them here:
Of course, the opposing view is that such assessment rubrics lead people to forget the second axiom: ‘not everything that matters can be measured’. I know I’ve sometimes watched presentations, for example, that are awesome, but whose awesomeness can’t be explained using the BIE assessment rubric. It’s like all rubrics actually need a criterion labelled “X factor!” for when a piece of work or project does something amazing that we didn’t plan to (or cannot) measure. And sometimes, when we focus students so explicitly on assessment rubrics, they can get obsessed with how to ‘game’ the criteria to reach the highest standard, rather than taking risks in their learning to work toward a big-picture goal.
As there is no ‘Ultimate God of PBL’, we are free to use whatever framework we want to think about “soft skills”. We can take up the Essential Fluencies, we can take up the skills foregrounded by BIE, we can use the 4Cs proposed by p21.org, or we can use the General Capabilities from the Australian Curriculum.
But ultimately I’d argue that yes, whatever framework you choose, you should find a way of explaining to students the standards you are looking for on a range of criteria, for the particular project they’re working on. Assessment rubric sheets should be designed to make the criteria and expected standards transparent to the learner, and to aid the feed-forward process throughout a project as well as the feed-back process at the end of a project.
I know I haven’t answered all of the parts of this student’s juicy question, and we’ll be talking more about it in class. It may generate another blog post. In the meantime…
- How would you answer this student’s question?
- Do you agree that providing assessment rubrics for soft skills is useful for learning in PBL (or otherwise)?
It’s that time of year. Teachers of Year 12 around Australia are scrambling to varying degrees to prepare students for final assessments and exams, which inevitably involves a whole lotta marking.
Of course, all teachers have to grade student work, and they are engaged in doing this all year. But nothing beats the pedal-to-the-metal feeling of marking Year 12 practice tasks in a last-ditch effort to refine students’ examination responses.
In particular, nothing beats the hellish pressure that exists in states like NSW and Victoria, where the HSC and VCE exams respectively loom over teachers and students alike. And out of all these teachers and students, I argue that those in writing-intensive subjects (e.g. English and History) have it the toughest: if you have a class of 25 for Year 12 and an assessment is coming up, teachers in these subjects are spending their nights and weekends correcting pages and pages and pages of long-form expositions.
Which can leave your eyes (and soul) feeling kinda like this:
I was prompted to write this blog post after watching my friends Justin and Alex tweet about their marking yesterday:
I’ve taught for the HSC three times and this slavish marking routine is the only part I do not miss…having said that, the jolly task I have now of marking as a university lecturer has involved marking binges that certainly rival the pain of HSC workload.
The question is – what can we do about it?
Is there anything we can do about it?
Some ideas that I threw out into the twittersphere yesterday seem promising, but without a class to try them on I can’t be sure they would work. The ideas I bounced around with Justin and Alex were:
- Focussing on writing just the introduction, or a body paragraph. This would make the task smaller and more focussed for students, and more manageable to mark 25-30 of them.
- Setting a paragraph writing challenge. To address Justin’s problem of the student that only writes about ‘tone’, each week set a different language feature/form for students to write a paragraph on. By the end of the term they will have a bank of paragraphs on different elements.
- Gamify the writing process. This could be done by putting students in groups, getting every student to write a paragraph (or essay), then having each group submit its best one (as judged by the students in the group) for marking. This means you only have to mark one essay/paragraph per group, not per student. Keep a chart of which group wins each week and award them a prize at the end of the unit. Change the groups around for each new unit.
- Peer assessment. This can only be used in a limited way, as students don’t have the capacity to grade work to a Year 12 standard. However you could use the ‘medals (feedback) and missions (feedforward)’ framework that Bianca draws on to give students a direction. I think the main benefit is that they read each other’s work and discuss their strengths, not that they actually give each other a ‘grade’.
- Find an authentic audience. Partnering up with another teacher/class would provide an avenue for students to share their work with another class on a platform such as a wiki. This would give students someone to perform for besides their own teacher, which could prove motivating. The teachers could also arrange to do a marking swap, and grade each other’s student essays…this may get you writing fewer comments, marking more objectively (?) and just plain old provide a change of pace as you get to read a different set of handwriting!
I really hope these ideas are useful to someone out there.
If you have any other good ideas for getting feedback to students without going through so much of the eye-bleedingly painful million-essay marking process, I would LOVE to hear them!
Thanks to Justin and Alex for inspiring this post and helping me brainstorm ideas 🙂
Images: Cropped screen still from True Blood, Season 5; Screen shot of conversation on Twitter.com
Postscript: If you liked this post, you may also like the post Matt Esterman wrote today, ‘The home stretch for Year 12’. Looks like we all have Year 12 on the brain this weekend!