A Critique of Generic Learning Outcomes

Generic Learning Outcomes (GLOs) are intended to enable cultural heritage organisations to be aware of the effectiveness of the environment for learning that they provide, and to provide quantitative evidence, nationally, of the impact of museums, libraries and archives on learning. This paper presents a logical critique of their strengths and limitations and proposes an alternative approach to assessing learning impact in lifelong learning contexts, based on the five key learning activity types defined by Laurillard. GLOs are subjective, post hoc measures of factors only indirectly related to learning. Using Laurillard's framework, the variety and heterogeneity of the informal lifelong learner group can be comfortably accommodated by a small number of learning experience types that can provide the foundation for specific "predictive" learning outcomes, so that the risk of developing learning activities that do not work well can be avoided.


Introduction
Generic Learning Outcomes are a set of measures for assessing the impact of cultural heritage institutions on their visitors. They were developed in the UK by Re:source: The Council for Museums, Archives and Libraries (now MLA) to enable organisations to be aware of the effectiveness of the environment for learning that they provide and to provide quantitative evidence, nationally, of the impact of museums, libraries and archives on learning (Hooper-Greenhill 2002).
The potential for museums, libraries and archives to support and encourage learning has long been recognised. For example, as long ago as 1942, the Committee on Education of the American Association of Museums recommended that museums should base their future identity and purpose on education (Low 1942). This view was echoed 40 years later when the American Association of Museums' report of the Commission on Museums for a New Century (American Association of Museums 1984) concluded that education would be among the primary issues facing museums in the 21st century. This role has acquired greater significance as national governments have come to appreciate the social, cultural, economic and individual benefits that flow from lifelong learning. In a joint introduction to the Department for Education and Employment's report on the learning power of museums, the then (2000) UK Secretaries of State for Education and for Culture, Media and Sport said: "Learning is at the heart of this Government's agenda because it is the key to a rich life for individuals and prosperity for the nation… the Government is seeking to create the 'learning habit' across the country, so that people of all ages can understand and enjoy the great cultural achievements of the past and the present, and gain the skills, attitudes and knowledge they need to contribute to and share in the information and communication age of this new century." (Smith and Blunkett 2000).
Cultural heritage institutions are increasingly seen as instruments for government policies on social inclusion, cohesion and access (Lawley 2003; Sandell 2003), and are required to present evidence of their performance (Hooper-Greenhill 2004). In the UK this has become a serious issue for publicly funded museums, galleries and archives. Funding levels across the sector are contingent on being able to present such evidence (Selwood 2001).
In 2001 the UK Museums Libraries and Archives Council (MLA) noted that museums "can make a real difference to people's lives by using their collections for inspiration, learning and enjoyment" and that "Museums are being reinvented as physical and virtual spaces in which people engage and learn, interacting with objects and discovering their stories. Interweaving the real and the virtual creates a powerful brand, enabling museums to occupy centre stage in cultural cyberspace." (MLA 2001). One way of measuring impact is to examine visitor numbers. By 2006, 86% of UK museums were used by formal educational groups and 88% by informal education groups, 40% of museums reported outreach to community groups, and 29% of museums reported outreach to older visitors (MLA 2006). However, visitor numbers are only a small part of the picture: they do not tell us about learning impact. It was against this background that Re:source: The Council for Museums, Archives and Libraries commissioned the research that led to the formulation of the Inspiring Learning for All framework of Generic Learning Outcomes (MLA 2004).

Generic Learning Outcomes
Learning outcomes are the result or consequence of some learning activity. The Inspiring Learning for All framework of Generic Learning Outcomes comprises five categories of results:

1. knowledge and understanding
2. skills
3. attitudes and values
4. enjoyment, inspiration and creativity
5. action, behaviour and progression

Each category relates to a different kind of impact on the visitor. Thus, for example: has the visitor acquired new knowledge, do they know how to do something new, do they feel differently about something, have they had fun, and do they intend to do something differently in future? It can be seen that categories 1, 2 and 3 map onto the cognitive, psycho-motor and affective domains, respectively, of Bloom's taxonomy of educational objectives (Bloom et al. 1956). Category 4 is apparently split between the affective domain (enjoyment, inspiration) and the cognitive domain (creativity). Category 5 is different in that, rather than trying to describe particular classes of behaviour, it is concerned with measuring whether any behavioural change either has occurred or is intended in the future. For example, does the visitor intend to progress towards further learning, by registering as a library user or signing up for a course?

GLOs have been widely adopted by UK cultural heritage institutions. Within two years of its launch, around half of museums in the UK were using a Generic Learning Outcomes based evaluation framework (MLA 2006). Box-outs 1 and 2 show typical sets of results of learning impact assessments carried out using Generic Learning Outcomes. The reasons for the success of GLOs are not hard to see. This kind of data is not hard to collect, and it provides a way of demonstrating unambiguously, in straightforward language, the extent to which the museum experience has had a positive effect on the visitor; this goes considerably beyond basic measures such as visitor numbers and "happy sheet" questions, e.g. "How much did you enjoy your visit to our museum today?" The results, as shown by the examples in box-outs 1 and 2, are impressive.
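As a concrete illustration, the five GLO categories and the kind of survey prompt each generates might be encoded as a simple lookup. The structure and prompt wordings below are our own sketch, paraphrased from the questions above; they are not official Inspiring Learning for All wording.

```python
# Illustrative sketch: the five Generic Learning Outcome categories,
# each paired with a paraphrased example survey prompt (hypothetical
# wording, not taken from the official framework).
GLO_CATEGORIES = {
    1: ("knowledge and understanding", "Did you discover something new today?"),
    2: ("skills", "Do you now know how to do something you could not do before?"),
    3: ("attitudes and values", "Do you feel differently about the subject now?"),
    4: ("enjoyment, inspiration and creativity", "Did you enjoy the visit?"),
    5: ("action, behaviour and progression",
        "Do you intend to do something differently, e.g. sign up for a course?"),
}

for number, (category, prompt) in GLO_CATEGORIES.items():
    print(f"{number}. {category}: {prompt}")
```

Survey instruments based on the framework typically ask one or more agree/disagree questions per category, as the box-outs below illustrate.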
Box-out 1. Results of a study involving 29,701 school students' contacts with UK National and Regional Museums

91% of KS2 and below pupils agreed 'I enjoyed today's visit'
64% of KS3 and above pupils agreed 'A visit to the museum / gallery makes school work more inspiring'
When pupils at KS2 and below were asked if they had learnt some interesting new things, 90% of pupils agreed.

There were a number of questions about knowledge and understanding for the older pupils:

89% of KS3 and above pupils agreed 'I discovered some interesting things from the visit today'
77% of KS3 and above pupils agreed 'The visit has given me a better understanding of the subject'
77% of KS3 and above pupils agreed 'Today's visit has given me lots to think about'
74% of KS3 and above pupils agreed 'I could make sense of most of the things we saw and did at the museum'

Source: RCMG (2004)

Box-out 2. Teachers' assessment of museum learning experiences
88% of all participants, including teachers, judged that pupils had learnt either something or a lot about a specified topic (n=6065).
For Key Stage 2 pupils and above, 47% of pupils judged that they had learnt a lot about a specified topic (n=3993), and 70% of teachers judged that pupils had learnt a lot about a specified topic (n=536).
61% of Key Stage 1 pupils reported that they had learnt a lot about a specified topic (n=1413).

Source: Stanley, J., Huddleston, P., Grewcock, C., Muir, F., Galloway, S., Newman, A., Clive, S. (2004)

Two points are worthy of note here. Firstly, none of the GLOs actually measure learning directly; rather, they measure indirect factors associated with learning, such as whether the experience was enjoyable, inspiring, or interesting. The closest they get to direct measurement of learning is by examining what visitors say about their own learning or the learning of those they were with. Secondly, following on from this, it is clear that GLOs are subjective measures, not objective measures of performance. As such, the results should be taken with a pinch of salt where small numbers of respondents are concerned: large numbers are needed to produce reliable results. Large numbers imply post-launch testing, since few institutions have the resources to test learning activities with large numbers prior to a public launch. Consequently we may conclude that GLOs are most effective as post hoc measures, most likely to be applied after a learning activity has been developed and made public.
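The point about sample size can be made concrete with a standard normal-approximation confidence interval for an agreement proportion. The sketch below reuses the 88% agreement figure and n=6065 from Box-out 2; the n=50 case is a hypothetical pre-launch pilot, not a figure from the studies cited.

```python
import math

def agreement_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion.

    p: observed agreement proportion, n: number of respondents,
    z: critical value (1.96 for 95% confidence).
    """
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# With the full Box-out 2 sample (88% agreement, n=6065) the interval
# is less than two percentage points wide...
low, high = agreement_ci(0.88, 6065)
print(f"n=6065: {low:.3f} to {high:.3f}")  # 0.872 to 0.888

# ...but with a hypothetical pilot of 50 visitors it spans about 18 points.
low, high = agreement_ci(0.88, 50)
print(f"n=50:   {low:.3f} to {high:.3f}")  # 0.790 to 0.970
```

This is why GLO results are credible for large post-launch surveys but shaky for the small samples feasible during development.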
This indeed is the purpose for which they were developed, but from a learning design perspective this approach has a major flaw. "Actual learning outcomes in this model may not emerge until a period of time has elapsed" (Clarke 2001: 26). In other words, GLOs measure what we may call "emergent" outcomes, and we may not know what learning outcomes a particular learning activity generates until well after the development and delivery costs have been incurred. This is not a sound basis for investment in learning design. Ideally, in order to maximise return on investment, we need to be able to measure reliably the probable learning outcomes of specific activities before significant resources have been invested in their development. It follows from this that we need to specify the intended learning outcomes in advance and use this specification as a benchmark for testing the design as it develops.

Learning outcomes in formal education
In formal education, learning outcomes, or learning "objectives" as they were originally called, are used to assess the extent to which the learner, the teacher and the learning activity have been successful. The essence of a well-formulated learning outcome is that it should be specific, objective and measurable (Bloom et al 1956, Mager 1984). That is to say, it should define unambiguously what the learner should be able to do, in terms that make it feasible for themselves and others to reliably measure their performance against the specified outcome. Using specified learning outcomes, learning activities can be tested in draft form with the help of volunteers or sample learners to assess how well they work. Distance learning and online teaching institutions such as the UK Open University have used this kind of iterative developmental testing for many years (see for example Brown, S., Kirkup, G., Lewsey, M., Nathenson, M. & Spratley, I., 1981) because courses, once offered to fee paying students, have to be robust, fit for purpose and right first time. We can call this kind of learning outcome "predictive" to distinguish it from the "emergent" learning outcomes measured by the Inspiring Learning for All framework of Generic Learning Outcomes.
So can we apply the concept of predictive learning outcomes to learning activity design in museums? Why do we need GLOs? Moussouri (2002) and others (Hooper-Greenhill 1991, 1992; Hein 1998; Clarke 2001) have argued that the museum audience is so much more diverse in terms of age, interests, knowledge, skills and motivations than a registered student that what the informal visitor wants from a learning experience is essentially unknowable in advance, and therefore learning outcomes for activities cannot be specified except very broadly: "since each individual learns in their own way, using their own preferred learning styles, and according to what they want to know, each individual experiences their own outcomes from learning" (RCMG 2004). It follows from this that attempts to specify predictive learning outcomes are undesirable and impractical: undesirable because they restrictively prescribe learner behaviours, and impractical because the visitor will use learning activities in quite unpredictable ways (Hooper-Greenhill 2004). "Learners construct meaning on their own terms no matter what teachers do" (Richardson 1997). But if we cannot specify learning outcomes, then arguably we cannot measure them, and if we cannot measure them, then how can we demonstrate the overall impact that museums, archives and libraries have on people's informal, lifelong learning?
This argument may be valid if we focus on what the learner learns, that is to say on the content of the learning activity. It seems reasonable to suggest that we cannot possibly know what all the visitors want from a particular exhibition on a given topic, or how they will use that information. But Stephen Downes (2005) argues that learning is not about content, and that "in the future it will be more widely recognized that the learning comes not from the design of learning content but in how it is used". If we shift the focus from the content to the user experience, then the number of possible outcomes is considerably reduced. In her book 'Rethinking University Teaching', Diana Laurillard (2002: 82-90) disaggregates learning into five different kinds of experience:

1. Attending or apprehending a lesson as a largely passive recipient of information.
2. Investigating or exploring some bounded resource in a more active way, where decisions about what to attend to, in what sequence and for how long are managed by the learner.
3. Discussing and debating ideas with others.
4. Experimenting with and practising skills.
5. Articulating and expressing ideas through the synthesis of some new product.
She suggests that these five different kinds of learning experience are best supported by different kinds of media, which she characterises as Narrative, Interactive, Communicative, Adaptive and Productive respectively.
Narrative media are essentially linear, highly structured and non-interactive. They are a vehicle for transmission of information and ideas but not, on their own, appropriate for supporting the iterative dialogue that is central to knowledge construction. Videos, animations, exhibition information panels and storylines are examples of narrative media employed by museums. Notice that, contrary to conventional thinking in the museums world (e.g. Marable 2004), Laurillard makes no distinction between exhibitions and other linear media such as film and performance.
Interactive media offer resources for learners to explore in a nonlinear way. Users can decide for themselves what to look at and in which order. It is important to understand, however, that in interactive media the given text, in its widest sense, remains unchanged by the user. Catalogues, databases, search engines and physical layouts of galleries, bays and shelves offer opportunities for self-directed exploration, but their contents remain unchanged by the viewer.
Communicative media are simply those that support feedback and discussion, e.g. email, discussion groups, video conferencing, wikis, etc. Laurillard argues that feedback and discussion are fundamental to knowledge construction, enabling an iterative dialogue between tutor and learner through which theories and ideas are conceived, shared and transformed into knowledge and understanding.
Adaptive media are similar to interactive forms but with the crucial addition of "direct intrinsic feedback" on learners' actions. That is to say, actions result in consequences that are inherent to the task/system under consideration. A tennis analogy would be that serving a ball so that it clips the top of the net provides intrinsic direct feedback to the player about the need to raise their serve. Additional, extrinsic, commentary from a coach is unnecessary in such a situation. Simulations and hands-on exhibits that can be used to experiment with phenomena such as light, electricity, mechanics and sound are commonly used by museums.
Productive media are defined as tools that allow learners to express themselves and to demonstrate their understanding, for example through storytelling, reminiscences, picture making, photograph swapping and tagging.
Finally Laurillard provides a bridge between learning experiences and the abstract notion of media forms by mapping both onto more familiar kinds of learning methods and technologies, as shown in table 1.
Table 1. Laurillard's taxonomy of educational media (Laurillard 2002: 90).

Rethinking University Teaching is "the most cited and influential work on higher education teaching strategies" (Jacobs 2005) and, although the book was written for Higher Education, the kinds of learning experiences Laurillard describes are broadly applicable to learning in general. Laurillard's taxonomy has previously been used to analyse online teaching (Brown and Cruickshank 2003). We suggest that it may be similarly applied to informal lifelong learning.
Regardless of the content of an activity, we can ask ourselves "what kinds of learning experiences do we wish to support with this activity?" Depending on the answer, we can then select one or more media forms from Laurillard's taxonomy to provide the most appropriate kind of learning experience. Thus, if we wanted visitors to a gallery to understand the subjective, partial nature of historical sources, we could provide them with an activity requiring them to compare and contrast different accounts of the same events written from different perspectives, and another activity in which they would have the opportunity to discuss and debate proposals for remedial actions based on their readings. In this example the content is relatively unimportant, even though it may be what interests the visitor and thus engages their attention. The proposed interactive and communicative elements would provide a scaffold for the learner to build their own specific, unique, individual learning experience, without telling them what they should know or believe or even remember about the historical events at the end of it.
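The selection step described above can be sketched as a simple lookup from desired learning experience to Laurillard's media form. The mapping follows the descriptions given earlier; the function name and the example museum techniques are our own illustrative choices, not part of Laurillard's framework.

```python
# Sketch of Laurillard's experience-to-media mapping, with example
# museum techniques drawn from the descriptions in the text.
LAURILLARD_MEDIA = {
    "attending":     ("narrative",     ["video", "exhibition panel", "storyline"]),
    "investigating": ("interactive",   ["catalogue", "database", "gallery layout"]),
    "discussing":    ("communicative", ["discussion group", "wiki", "email"]),
    "experimenting": ("adaptive",      ["simulation", "hands-on exhibit"]),
    "articulating":  ("productive",    ["storytelling tool", "photo tagging"]),
}

def media_for(experiences):
    """Return the media form supporting each requested learning experience."""
    return {exp: LAURILLARD_MEDIA[exp][0] for exp in experiences}

# The gallery activity above combines exploration of sources with debate:
print(media_for(["investigating", "discussing"]))
# {'investigating': 'interactive', 'discussing': 'communicative'}
```

The design decision, in other words, is made at the level of experience types, of which there are only five, rather than at the level of content, of which the variations are endless.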

Conclusions
So, to recap: we have seen that GLOs, developed to measure cultural heritage institutions' contributions to learning, do not measure actual learning; they focus instead on factors that are indirectly associated with learning. Moreover, they deliberately eschew the standard practice in formal education of specifying "predictive" learning outcomes in favour of measuring open-ended "emergent" learning outcomes. Thus, while they have considerable value as overall institutional performance measures, they do not get to the heart of measuring actual learning, and they cannot be used predictively to assess the likely learning effectiveness of any given learning activity.
Laurillard's framework, on the other hand, seems to offer an alternative set of "generic" learning outcomes that could be used to guide the development of new learning activities and to provide the basis for measuring the effectiveness of those activities. The advantages of Laurillard's framework are, firstly, that a small number of learning experience types can result in a huge range of potential learning outcomes, so the variety and heterogeneity of the informal lifelong learner group can be comfortably accommodated. Secondly, although Laurillard's learning activity types are generic, it is not difficult to see how they can provide the foundation for specific "predictive" learning outcomes that can be used to test prototype learning activities, so that the risk of developing learning activities that do not work well can be avoided.