Herding Tigers, Part II
'Herding Tigers' was held at Oxford University Computing Services on 17 December 2003, with a theme of 'Best practice in e-Learning development'. This was the second event of its type organised by the University's Learning Technologies Group [1]; the previous year's event had a slightly different focus, on raising awareness and on collaborative working between e-Learning practitioners and academics. This year the day was devised as an opportunity to discuss some of the practical challenges facing developers in three areas: accessibility, applying learning technology standards, and evaluation.
Accessibility
The first topic tackled was accessibility, with an excellent session facilitated by Brian Kelly of UKOLN, who had very kindly stepped in at the last minute. Delegates split into small groups to discuss a wide range of accessibility-related topics, including the end-user experience; education and involvement (particularly of those preparing content); and useful tools and technologies. Each group fed back the fruits of its discussion to the wider group, and it was pleasing to see something of a consensus on how accessibility could and should be approached, with a particular emphasis on what was most pertinent to providing e-Learning resources.
The discussion was too lengthy to cover in its entirety here, but the main conclusions can be summarised as follows:
- Most people use the WAI guidelines as a starting point for assessing the accessibility of their site, but it was acknowledged that there are things the guidelines do not cover, or do not cover well. In general, nobody relied entirely on a checklist-based approach (a minimal example of the kind of check a checklist can automate is sketched after this list), with some things always dealt with on a case-by-case basis. This common-sense approach needs to be maintained when educating non-expert content providers; ideally they should be given a sense of priority, with a list of 'must do', 'should do' and 'helpful to do' measures
- A basic accessibility strategy can cater well for most students and most disabilities. Over and above this basic strategy, provision for more complicated situations quickly becomes very expensive and time-consuming (which is not to say that it should never be attempted)
- There were two schools of thought on when accessibility work should be done on a Web site: the 'just in time' and the 'just in case' approach. The former entails building as flexible a site structure as possible and then tweaking it according to the needs of the students enrolled on the course that term; the latter is to build the site to be as accessible as possible from the outset
- There was some feeling that accessibility could endanger innovation on the Web, but also that this need not happen. A good guiding principle that emerged was to make an accessible site and then make it engaging, rather than making an engaging site and then trying to make it accessible
- The sceptics among us needed reminding that we are all 'temporarily able-bodied' and could easily be affected by a disability at any time (as easily as breaking your mouse finger!)
- Virtual Learning Environments (VLEs) can be problematic because content is not so easily controlled, but they can also play an enabling role by allowing groups of disabled students to be supported and to communicate with each other (this was taken to cover the full spectrum of 'disabled groups'; students with poor language skills, for instance, could be supported together)
- Institutional accessibility policies are a 'must', but they are not much use if not enforced in some way. This can be a difficult area, particularly in relation to VLEs; UKOLN's QA Focus Project [2] looks at how policies can be enforced. Any policy that is drawn up must involve all of the relevant groups (technical and academic staff, students and policy makers)
- There was an important reminder that our aim is actually to make the learning experience accessible as a whole. For students who cannot access a particular piece of e-Learning, it can help to remember how we taught that concept in the old days, pre-Internet
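To make the 'checklist versus judgement' distinction concrete, here is a minimal sketch of the sort of check a checklist-based tool can automate: flagging images with no text equivalent (WCAG 1.0 checkpoint 1.1). The sample page fragment is invented for the illustration, and the point stands that such a check cannot tell you whether the alternative text that is present is actually meaningful; that judgement remains a case-by-case matter.

```python
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Flag <img> tags that carry no alt attribute -- one small, automatable
    WCAG-style check. It cannot judge whether existing alt text is useful."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.problems.append(f"missing alt: {attributes.get('src', '?')}")

# Hypothetical page fragment, used only to demonstrate the check.
sample = '<p><img src="diagram.gif"><img src="logo.gif" alt="College logo"></p>'

checker = ImgAltChecker()
checker.feed(sample)
for problem in checker.problems:
    print(problem)   # -> missing alt: diagram.gif
```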
Learning Technology Standards
For the next session participants were thrown in at the deep end with a whistle-stop tour of interoperability standards and tools. There was a wide range of experience across the group, and for many it was the first time they had worked with the standards in a practical capacity. For this reason Howard Noble of the Learning Technologies Group at Oxford had created an open-ended exercise running through the full lifecycle of a content package, with the intention that participants could carry on with it at their leisure later. Participants downloaded a content package from the intraLibrary Learning Object Management System on the Intrallect Web site [3] and edited it in Reload [4], the free editor being developed at the Bolton Institute. The next step was to republish the edited package back to intraLibrary and, time permitting, to advertise it in another repository, MERLOT [5]. The practical session ended with a discussion of next-generation standards-based tools, in particular the learning design-inspired LAMS (Learning Activity Management System) software.
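For readers new to content packaging, the exercise revolves around the package's manifest (the imsmanifest.xml file defined by the IMS Content Packaging specification), which is what an editor such as Reload works on and a repository such as intraLibrary ingests. The sketch below generates the skeleton of such a manifest for a single web page; it is illustrative rather than a complete, schema-valid manifest, and the identifiers and file names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Skeleton of an IMS Content Packaging manifest describing one web page.
# Identifiers and file names are invented; a real manifest would also carry
# schema locations and metadata in order to validate against the specification.
CP_NS = "http://www.imsglobal.org/xsd/imscp_v1p1"

manifest = ET.Element("manifest", {"identifier": "MANIFEST-EXAMPLE-01", "xmlns": CP_NS})
ET.SubElement(manifest, "metadata")

organizations = ET.SubElement(manifest, "organizations", {"default": "ORG-01"})
organization = ET.SubElement(organizations, "organization", {"identifier": "ORG-01"})
ET.SubElement(organization, "title").text = "Example course unit"
item = ET.SubElement(organization, "item",
                     {"identifier": "ITEM-01", "identifierref": "RES-01"})
ET.SubElement(item, "title").text = "Introduction"

resources = ET.SubElement(manifest, "resources")
resource = ET.SubElement(resources, "resource",
                         {"identifier": "RES-01", "type": "webcontent", "href": "intro.html"})
ET.SubElement(resource, "file", {"href": "intro.html"})

ET.ElementTree(manifest).write("imsmanifest.xml", encoding="utf-8", xml_declaration=True)
```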
The hands-on session was immediately followed by lunch, but before returning to the discussions after the break there was an interesting and technical talk given by David Balch of Technology Assisted Lifelong Learning (TALL), another Oxford-based group. David's presentation covered the workflow that TALL [6] has gradually refined over the last few years to specify, produce and deliver online courses using an XML and standards-based approach. Of particular interest was their clever use of MS Word templates, with Word styles giving academic authors a familiar content authoring tool whose output can then be converted (almost!) automatically into their in-house XML content schema.
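TALL's own templates and schema were not shown in detail, so the sketch below only illustrates the general idea: once authors mark up their text with a small set of named Word styles, a converter can map each style onto an element of a content schema. The style names, element names and the already-extracted list of styled paragraphs are all hypothetical; the real conversion has to cope with Word's file format and a much richer schema, which is where the 'almost' in 'almost automatic' comes in.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from Word paragraph styles to content elements.
STYLE_TO_ELEMENT = {
    "Course Heading": "heading",
    "Learning Outcome": "outcome",
    "Body Text": "paragraph",
    "Activity": "activity",
}

def paragraphs_to_xml(paragraphs):
    """Convert (style name, text) pairs -- assumed already pulled out of a
    styled Word document -- into a simple XML content structure."""
    unit = ET.Element("unit")
    for style, text in paragraphs:
        # Unknown styles are the bits that still need a human eye.
        element_name = STYLE_TO_ELEMENT.get(style, "unclassified")
        ET.SubElement(unit, element_name).text = text
    return ET.tostring(unit, encoding="unicode")

print(paragraphs_to_xml([
    ("Course Heading", "Week 1: Getting started"),
    ("Learning Outcome", "Identify the parts of a content package"),
    ("Body Text", "This week we look at..."),
]))
```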
Evaluation
The evaluation session was run by Martin Oliver of University College London, with a strong component of small-group discussion in much the same vein as the morning's accessibility session. The focus was on sharing tools and techniques and on contemplating some of the political issues arising from evaluation (an area which most of the audience found particularly useful). Evaluation was considered especially hard to get to grips with because strategies need to vary on a case-by-case basis, and most people looked to a triangulation of qualitative and quantitative methods to build up an overall picture. Producing at least some numbers at the end of an evaluation was deemed a necessary evil because "managers like numbers", although we were not sure what managers themselves would make of this generalisation. Tracking tools and Web logs were a popular choice for providing these numbers (particularly in the age of VLEs). However, caution was advised, given that the numbers are often difficult to analyse meaningfully and do not always show what they seem to: a hit on a Web page does not tell us whether the student understood, or even read, the material there.
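As a concrete illustration of where such numbers come from, the short sketch below counts requests per page from a Web server access log in the common log format. The log file name and the regular expression are assumptions for the example, and the caveat above applies in full: the counts are at best a crude proxy for use, and say nothing about whether anyone read or understood the pages.

```python
import re
from collections import Counter

# Pull the requested path out of a Common Log Format line, e.g.
# 127.0.0.1 - - [17/Dec/2003:10:15:32 +0000] "GET /unit1/intro.html HTTP/1.1" 200 5120
REQUEST = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def hits_per_page(log_lines):
    """Count requests per path -- a crude proxy for 'use', nothing more."""
    counts = Counter()
    for line in log_lines:
        match = REQUEST.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Hypothetical file name; in practice this would be the VLE or Web server log.
with open("access.log") as log:
    for path, hits in hits_per_page(log).most_common(10):
        print(f"{hits:6d}  {path}")
```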
The old problem of actually getting feedback from end-users was something that nearly everybody had had to deal with at some point. A variety of strategies had been tried, ranging from the nice (bribery in the form of HMV vouchers) to the nasty (preventing students from accessing course work until a feedback form was received). Not surprisingly, there were doubts about the quality of feedback that might be obtained using some of the more extreme methods! A cheering thought was that, in some cases at least, the quantity of feedback was not necessarily all that important, provided that the feedback received was useful. Considering the motives (political and otherwise) behind carrying out evaluation work was another useful issue raised. It was pointed out that an evaluation intended to gather ideas for genuinely improving a Web site might happily result in a list of problems, but if the intention was to gather evidence to show a project in a good light, then a list of problems and requests for changes would not be the best information to collect.
Conclusions
The closing discussion was led by Stuart Lee, Head of the local Learning Technologies Group. Although the day involved mostly people on the technology side of resource development, with a smaller number of academics among them, both parties felt there was a need for a shared vocabulary to link these two groups, and also staff development/educational specialists (a task that JISC is taking on in its Pedagogies for e-Learning Programme). A new pedagogic trend was identified: that of creating e-Learning experiences, rather than the conventional approach of teaching. Students are driving this agenda themselves with their demands for increasingly sophisticated computer-based resources.
Overall the day was very enjoyable and provided a great deal of food for thought. One of the best points of the day was the opportunity to meet and share with others 'in the same boat'. As one of the participants said: 'There are like-minded souls out there'.
References
1. Learning Technologies Group http://www.oucs.ox.ac.uk/ltg/
2. QA Focus http://www.ukoln.ac.uk/qa-focus/
3. Intrallect http://www.intrallect.com
4. Reload http://www.reload.ac.uk
5. MERLOT http://www.merlot.org/
6. TALL Development Tools http://www.tall.ox.ac.uk/tallinternet/projects/projects_development-tools.asp