Max Landsberg, The Tao of Coaching

A four-part coaching course was my CPD highlight of last year, and I have been enjoying using my coaching skills at work, both with colleagues and older students. A year on from the course, it seemed like the right time to reinforce and develop my understanding of coaching by doing some more reading on the subject. I will admit to some initial scepticism about this book, which interleaves coaching techniques with the story of Alex, an employee negotiating his way through the corporate world and learning about coaching along the way. I have an instinctive dislike for never-quite-plausible-sounding ‘real life’ stories intended to impart a message to the reader. But I must admit that the stories did help me remember the different coaching techniques – there really was a time when a sixth form student came to me with a problem and I thought: ‘I need to use that coaching strategy that Alex used with Mary’. The chapters are short and to the point. The different coaching techniques are all clearly explained, with diagrams as well as written explanations. Landsberg also makes it clear when and why you should use the different techniques. I borrowed the copy I read from the library but I will be buying my own, because this is a reference book to have to hand.

National Center for Education Research, Organizing Instruction and Study to Improve Student Learning

This article describes itself as a ‘practical guide’ and it is precisely that.  It provides:

(1)  A set of recommendations for teachers wanting to help their students study as effectively as possible. Most of these also pop up in Pomerance, Dunlosky and Rosenshine – e.g. spaced learning and asking probing questions – but there is also a section on helping students to use their own time effectively, which I felt said something distinctive.

(2)  An assessment of the strength of the evidence for each technique. The positive evidence for each technique is rated as low, moderate or strong. A low rating should not be misunderstood as indicating that a technique can be discounted. It means that the expert who devised or advised on the technique did so on the basis of work in a related area, as opposed to direct educational research. A rating of strong, on the other hand, means that the evidence came from randomised controlled trials or psychological experiments.

(3)  Checklists for teachers to follow when using the recommended techniques. For example, when interleaving worked examples and problems for students to solve, the checklist suggests: ‘As students develop greater expertise, reduce the number of worked examples provided and increase the number of problems that students solve independently.’ For those wanting to combine graphics with words, the advice is: ‘When possible, present the verbal descriptions in an audio format rather than as written text. Students can then use the visual and auditory processing capacities of the brain separately rather than potentially overloading the visual processing capacity by viewing both the visualisation and the written text.’

(4)  Solutions for the ‘roadblocks’ which might emerge when using a particular technique in the classroom. Do you feel that your colleagues might object to providing lots of worked examples on the grounds that all the students will do is learn the answers to these specific problems, rather than master ‘the underlying concepts being taught’? If so, it is worth knowing that when you interleave solved and unsolved problems, students ‘pay more attention to the worked examples because it helps them prepare for the next problem and/or resolve a question from the last problem’.

(5)  An appendix with detailed information on the studies on which the recommendations in this article are based.  This allows readers to judge for themselves the relevance and value of particular studies.  It also provides them with a better understanding of how educational research is being carried out and exactly why teachers are being recommended to do particular things. It’s good to know what works but it is better to know how and why it works, and this article will tell you all those things.


D. Christodoulou, Making Good Progress? The Future of Assessment for Learning

What does assessment mean to most of us?  It probably means:

  • Assessment for Learning
  • Awarding levels and implementing mark schemes that have been devised by someone else
  • Writing assessments intended to mimic those produced by exam boards, in the hope that these will help prepare students for their GCSEs and A levels

Most of us probably share a frustration with mark schemes and levels, and have at least some idea of the ways in which they are flawed. What we haven’t thought about is how to make an alternative system work. Fortunately, in this book, Daisy has done the thinking for us, and we should be grateful for that because assessment is a vast and very complicated topic. We need the cogent and insightful answers that Daisy has provided to the following questions:

  • How can we make assessment for learning successful?
  • Why do we need to use different tasks to assess formatively from those we use to assess summatively?
  • How do we close the gap between what students know in theory and what they can do in practice?
  • Why is it a bad idea to use examination grades to measure progress in the classroom?
  • How can we improve assessment and reduce teacher workload simultaneously?

I am not going to summarise Daisy’s argument here because she has already done that in a series of blog posts, which are far more articulate than anything I could write on the subject. Her blog posts can be found at: https://thewingtoheaven.wordpress.com/. When you have read the blog posts, read the book, which contains the detailed explanation you will need to engage with if you are to understand Daisy’s argument properly. Both this book, and conversations that I have had with Daisy over the years, have inspired me to make changes to the way I assess students. None of the changes were difficult to make, and all were positive and fruitful, which is why I would urge everyone to read this book with an open mind and think carefully about what they should change about their assessment methods.


J.A. Baxter, Teaching Girls to Speak Out: The Female Voice in Public Contexts

I rather wish I hadn’t read this because its style is pretentious and it’s rather out of date. I also felt that it was lacking in advice, but it did make me think about a debate that I teach as part of the A2 Politics Feminism module. The debate is about whether or not women should make a conscious effort to behave more like men in the workplace. In Lean In, Sheryl Sandberg argues that women who want to be successful need to adopt certain ‘male’ characteristics, but other feminists, including Ananya Roy, argue that women should not change because men need to accept and appreciate them as they are. I’m not sure which of these positions I support. My suspicion is that following Sheryl Sandberg’s advice will yield quicker results for individual women, but I also agree with Roy that women should not feel obliged to change. ‘Male’ ways of doing things are not necessarily the best ways, or the ways we want the next generation to replicate. I wish I could say that this debate is becoming obsolete because there is increasing acceptance (a) that women are as capable in all contexts as men and (b) that things do not need to be done a particular way. I think, though, that that conclusion is premature, even if it is more true than it used to be. I hope that by teaching girls all the subjects, knowledge and skills that their male counterparts learn, and by encouraging their confidence and self-belief, we are helping to lay the groundwork for a totally equal society. But I don’t want to teach my students, either implicitly or explicitly, that success will come more easily in lots of fields if they behave more like men. I think part of my job is to champion girls as they are, not to encourage them to change to suit working practices that are more established and unquestioned than desirable or helpful. So, I will continue to explore this debate with my students, but I will also continue to tell them to assess every situation and solution on its merits, and not to think that the norm is necessarily the best.

Some more thoughts on educational research

Last week, I wrote about the difficulties of educational research, and why we need more information on the circumstances in which research has been conducted. After I had written that article, Daisy (Christodoulou) pointed out that E.D. Hirsch makes a related point in his article ‘Classroom research and cargo cults’. Hirsch discusses a study which seemed to show that reducing class size improved educational outcomes for early years students and decreased the achievement gap between students from higher and lower income families. The study prompted policymakers in California to spend $5 billion on reducing class sizes in their state. Despite the injection of funds and the apparent evidence that reducing class size worked, the results in California were disappointing. Hirsch argues that they were disappointing because the researchers who undertook the original study did not provide a ‘theoretical interpretation of [their] own findings’. They did not, for example, include any systematic analysis of why smaller class sizes seemed to benefit younger students more than older ones. The policymakers in California who read the study therefore concluded that reducing class sizes was sufficient to achieve the gains they sought. They did not ask what else they might need to do alongside reducing class size because no one had suggested to them that this was an issue they needed to consider. This leads Hirsch to conclude that studies which report improvements in educational attainment should be accompanied by an academic exploration of the precise causes of these results. I can recommend reading Hirsch’s article in full. It can be found here: http://www.hoover.org/research/classroom-research-and-cargo-cults. Anyone interested in how we use evidence in teaching should also read Daisy’s article on why the evidence from randomised controlled trials, although useful, is less useful than scientific evidence on how the brain works: https://thewingtoheaven.wordpress.com/2012/02/26/different-types-of-evidence/.


Educational research: what do we need to know?

The Christmas edition of the Economist contained an article called ‘Animal Factory: the evolution of a scientific meaning’. It was about the difficulties of conducting experiments on laboratory mice. I learnt that ‘Not all mice are equal, even if their genomes are.’ Two sets of littermates that have been raised apart will respond differently to the same experiment. Some laboratories only do experiments on male mice, but what works for the gentlemen does not always work for the ladies. In fact, what works for males in single-sex groups sometimes fails to work for males in mixed-sex groups. Why is it so difficult to reproduce research findings? In part, because many journal articles omit ‘crucial details’ about the way in which experiments have been conducted. Carelessness explains some of these omissions. Sometimes, scientists fail to include information about conditions that are known to be significant. At other times, information is omitted because of an assumption that it is irrelevant. It is becoming increasingly clear that it isn’t, not least because attempts to reproduce research carried out by one laboratory in another often fail. This occurs even when all the scientists are using the same ‘reliably uniform’ mice from the same source – the Jackson Laboratory in Bar Harbor, Maine, one of the world’s biggest suppliers of laboratory mice. If the mice do not get near-identical treatment in their different labs, the results from the experiments will not be the same. In the past, it has been assumed that the precise details of how mice were fed and cared for were irrelevant to the outcome of an experiment. This is now increasingly being shown to be a misplaced assumption. For example, a 2014 study ‘showed that mice in pain … experienced extreme levels of stress if the researchers handling them were men, but not if they were women, a difference no-one had thought to look for, or report.’ Furthermore, there was a heated debate in the scientific community last year about the impact on studies of the ‘lab mouse’s microbiome, the bacteria that live in and on it.’

Why am I telling you about the difficulties with conducting and replicating experiments on laboratory mice? Because the first thing I thought when reading the article was: if it’s this complicated with mice, how much more complicated is it with children? The second thing I thought was that this suggests we need to have a lot of information about the circumstances in which educational research is being conducted, information which we would not always think to request.

I teach four Year 7 classes this year. All four groups are at the same school, they all contain a similar ability range, all are following the same curriculum, and all are taught by me using the same lesson plans but, as we all know, a wide variety of other things can have an impact on how well individual lessons go. Is the lesson in the morning or the afternoon? (I teach one of my groups on a Monday and another last thing on a Friday.) What lesson and teacher have the pupils just come from? Who are the dominant personalities in the class? Are they broadly on side, or do they wish they were elsewhere? What is the weather like? (A class that have been cooped up inside all day because of the rain will always be a little restless, and I find that windy days seem to send classes a bit bonkers, but that might just be me.) The behaviour of individual pupils varies depending on what mood they’re in, how many doughnuts they ate at break time, what another pupil or teacher has just said to them, and what’s going on at home. And I don’t want to imply that I’m the one constant in the room because, however hard I try, I am not perfectly consistent. The last interaction I had with a colleague, pupil or parent can make a difference to how I feel and probably affects the person I am in the classroom, however much I don’t want it to. I always try to teach as well as I can, but it is likely that my performance on a Friday afternoon compares unfavourably with my abilities on a Monday morning.

The important point is not that conducting educational research is fraught or that drawing conclusions about the effectiveness of different teaching methods is impossible. The fact that we can’t confine ourselves to comparing classes of identical twins at identical schools isn’t a reason to reject research. Conducting educational research is hard, but the problems that come with it are well known, and I am sure that the best researchers work hard to control for them. Without research we would have no idea at all what works, and that is something on which we definitely need evidence. Once a teaching method has been tried, tested and proven, we should be implementing it at every opportunity. The fact that we may implement it better at certain times and with certain classes than others is not a fault of the method itself. The important point is that it would be both interesting and helpful to know a lot more about the situations in which educational research has been conducted, because children, like mice, are surely affected by all aspects of their environment. Has a particular method been shown to be effective within schools? If so, it isn’t enough to know whether the schools in which the research was done were comprehensive or grammar, single-sex or co-ed, in the West Country or in London. We need to be asking a wider range of questions. Here are just three suggestions:

  • What is the ethos of the school and how is this communicated and enforced?
  • What are the school’s disciplinary systems?
  • How is the school day structured?

We need to know not just what works in education but in exactly what circumstances it has worked.  That will give us the best possible chance of implementing successful solutions that will benefit all our pupils.


John Dunlosky, Strengthening the Student Toolbox: Study Strategies to Boost Learning

I teach many students who stubbornly insist that they revise best simply by re-reading their notes. I have also known students to ignore their notes altogether and instead revise exclusively by reading the textbook, despite my warnings that the textbook in question does not cover the whole syllabus and/or includes limited amounts of the necessary detail. I have tried explaining why re-reading isn’t an efficacious way to revise, only to be met with claims that ‘it works for me’ or ‘it’s been fine for other exams’. Last year I made a list of students who I knew were doing practice questions as part of their revision programme, because I was curious to see if these students would do better than those of similar ability who refused to revise in this way. They did. The list was only for my own interest, of course. I didn’t turn round to students on results day and say: ‘If you’d started doing practice questions in March, like Deidre, you wouldn’t be crying now.’ This year, I have been a bit more proactive and have provided Dunlosky’s excellent article for my examination classes as a way of reinforcing my message about how to revise effectively.

Dunlosky explains the benefits of both practice testing and distributed practice in useful detail. His focus is on these two techniques because they are the two for which there is the most positive evidence. Practice testing, by requiring students to retrieve information from their long-term memory, strengthens that memory. It also highlights to students what they do not yet know, and thus where they should focus their efforts. Dunlosky suggests that teachers encourage students to use flashcards when taking notes. With a key word on the front and important information on the back, they make self-testing easy. Also, if a student has a stack of flashcards, it is easy to put one to the back of the pile once the information on it has been learnt, allowing the focus to be on the as-yet-unlearnt cards. Concerning distributed practice, Dunlosky points out that although ‘learning appears to proceed more slowly’, the information will stick in a way it won’t if students cram. He also points out that many students already use distributed practice successfully. For example, when preparing for a dance show, the participant will normally practise their routine regularly over a period of time, as opposed to simply running through it ten times the night before the performance. Reflecting on how many of my students train for sports matches, gym and dance shows and concerts by distributed practice, I realised that I had an excellent retort to the students who claim that starting work well in advance won’t help them.

Dunlosky also devotes some space to techniques which have emerged as ‘promising’ from recent studies but which lack the array of evidence necessary to be heralded as definitively beneficial. These are interleaved practice (‘not only distributing practice across a study session but also mixing up the order of materials across different topics’), elaborative interrogation (‘trying to elaborate on why a fact might be true’) and self-explanation (‘trying to explain how … new information is related to information that [s/he] already knows’). There is also some discussion of (a) summarisation, which is only really useful to those who have received training in how to summarise, and (b) strategies that involve mental imagery, such as keyword mnemonics and imagery for text (‘students develop[ing] mental images of the content as they read’), which have been shown to help students retain information in the short term but are not even applicable in many contexts. Re-reading and highlighting are both crisply dismissed as revision techniques. Students please take note, preferably on a flashcard.