After a few months as a new graduate trainee with a large organisation, I was despatched with my fellow recruits to the Lake District for a few days for what was referred to as “Outward Bounds”. We were set various challenges, like finding our way to a hut where we spent the night, and an elaborate game in which we had to carry out tasks simulating the production of goods by a business, punctuated by sessions where we sat down and talked about how we had done as a team. I can’t say I enjoyed it. It rained the whole time, the instructors were rude to us, and we ended up arguing with each other and counting the hours until we could go home.
I don’t think I learned anything from all this, apart from the need to make sure you have weatherproof gear if you go to the Lake District. No one ever explained to us why we were going on the course and to this day I don’t really know what the organisation was trying to achieve by sending us there. My guess is that someone senior thought, without much basis, that it would help us “learn teamwork” or something like that.
This is a pattern I have seen again and again through my business career. Initiatives are introduced and decisions made without anyone really defining what problem needs solving, let alone whether there is actual evidence which supports the solution being proposed. This has been recognised by several academics, and I was delighted a few years ago to come across a book by Jeffrey Pfeffer (2007) from Stanford, which uses hard data to offer practical advice to managers and to explode myths. For example, he demonstrated why total shareholder return was a meaningless and destructive performance measure before the financial crisis made this obvious (and it still remains widely used despite the damage it has caused). Professor Pfeffer and colleagues have set up an excellent website devoted to evidence-based management. And other academics also do a good job of making research relevant to managerial decision-making (for example, Vermeulen, 2010 and Rosenzweig, 2008).
It is an uphill battle, and there are reasons for this. Business is a broad field and there are profound differences between organisations. It does not lend itself well to statistical trials, and there is no set body of training required to become a manager. There is also a strong tendency to go with the herd (which is particularly well dissected by Prof. Vermeulen). Nonetheless, my own view is that, before introducing an initiative, every business manager should be at least considering two questions:
1) What problem am I trying to solve?
2) Is there any evidence available that helps guide me as to whether this will solve my problem?
My experience is that the first step is often not taken, and if you don’t define the problem you can’t even consider whether a particular action will offer a potential solution.
Evidence for technology-enhanced learning
For all these reasons, I was particularly interested in an article we have just read in our course that looks at the idea of “technology enhanced learning” and considers whether the evidence supports the value judgement underlying that phrase – the idea that technology enhances learning (Price & Kirkwood, 2010). Based on a literature review, the authors found the area problematic, beginning with the whole idea of “evidence” in this field:
“Evidence may be understood as the demonstration of a truth, but the interpretation of truth as objective, subjective, absolute or relative, influences what is acceptable as truth and hence evidence.”
This seems to be a pretty good summary. The dictionary definition of “evidence” illustrates further:
“1. that which tends to prove or disprove something; ground for belief; proof.
2. something that makes plain or clear; an indication or sign: His flushed look was visible evidence of his fever.
3. Law. data presented to a court or jury in proof of the facts in issue and which may include the testimony of witnesses, records, documents, or objects.”
In a field like education, or management, it will be rare for anything to be “proved or disproved” for all time, but the search is definitely worthwhile and may well lead to better practice. The fact is that evidence in education can take a variety of forms – Price & Kirkwood divide it into three categories:
- Accounts of innovation, including anecdotes, observations and questionnaire data
- Lessons learned from implementation, including quantitative and qualitative data
- Changes in practice – evidence used to drive a change, followed by an evaluation of its effectiveness
The studies reviewed overwhelmingly fell into the second category, although the quality of evidence collected was poor. The problems seem to start at the beginning:
“In many of the studies there is no indication of the rationale, i.e. what prompted the innovation, other than a desire to experiment with a particular technology or tool. Few describe a teaching or learning issue that needs to be addressed and hardly any examine educational problems or opportunities that their particular students are facing.”
In other words, they weren’t defining the problem and in that case you cannot even start to demonstrate whether the problem has been solved or not. This is why I was so attracted to Digital Study Halls, an Indian project I previously blogged about. It seems to me they took a clear, well-defined problem, which was a chronic lack of skilled teachers in rural India, and addressed it using technology, in this case DVDs. A follow-up study (Sahni et al, 2008) showed the dramatic impact this had on student test scores, which is not the only measure of improvement in education, but was a reasonable one in this particular case. But such carefully thought-through initiatives, followed up with research on effectiveness, are rare.
A way forward
So here is what I would look for before launching an e-learning initiative. Firstly, let’s clearly define the problem to be solved. Is it lack of student access to resources? Poor academic progression among certain groups? Lack of student engagement? High cost, which excludes certain people from educational opportunities? Or something else? Once we have defined the problem, we can look for any existing evidence (which could include any of the categories above) that the initiative can help solve our problem, if anything similar has been attempted before. Where applicable, numbers based on outcomes (for example, as shown by the Digital Study Hall team) will tend to be more convincing than other forms of evidence such as survey data showing perceptions, or anecdote.
If there is no real evidence, that should not necessarily stop us, but perhaps the initiative should be a small-scale pilot, with limited costs and inconvenience if it does not work, and with plans to disseminate lessons learned whether it succeeds or not. In other words, we need a commitment to evaluation and to learning from experiments. Then perhaps we will realise much more of the potential technology has to improve learning.
Pfeffer, J. (2007) What were they thinking? Unconventional wisdom about management, Boston, Harvard Business School Press
Price, L. and Kirkwood, A. (2010) ‘Technology enhanced learning – where’s the evidence?’ in Steel, C.H., Keppell, M.J., Gerbic, P. and Housego, S. (eds) Curriculum, Technology and Transformation for an Unknown Future, Proceedings ASCILITE Sydney 2010, 5–8 December 2010, Sydney, Australia; also available online at http://ascilite.org.au/conferences/sydney10/procs/Price-concise.pdf (accessed 24 February 2012)
Rosenzweig, P. (2008) The Halo Effect: How managers let themselves be deceived, London, Simon & Schuster
Sahni, U., Gupta, R., Hull, G., Javid, P., Setia, T., Toyama, K. and Wang, R. (2008) ‘Using digital video in rural Indian schools: a study of teacher development and student achievement’, Annual Meeting of the American Educational Research Association, New York City, March 2008
Vermeulen, F. (2010) Business Exposed: The naked truth about what really goes on in the world of business, Harlow, FT Prentice Hall