
Bridging the gulf: mixed methods and library service evaluation

Abby Haynes

Manuscript received March 2004

This is a refereed article

What is evaluation?

Evaluation here is concerned with efficacy and the extent to which objectives have been met within the provision of social amenities (Greene, 2000); it most often uses research methods to question people about a service they have recently experienced (Pawson & Tilley, 1997). Although some consider evaluation to be the third and final step in a linear model: needs assessment > planning > evaluation (Witkin, 1994), others regard it as part of each step, seeing the development of services as a more cyclical, reflective process in which evaluation is combined with on-going needs assessment and planning.

Hernon distinguishes between the terms 'assessment' and 'evaluation', explaining that assessment is the process of gathering data, while evaluation is the final stage in which the data is interpreted and 'value judgments' are made (2001, pp94-95). This paper will argue that value judgments are made throughout the entire process of needs assessment, planning, service provision, assessment and analysis, and that these judgments strongly affect the nature of the service as well as the means of evaluating it. Therefore the term 'evaluation' will be used broadly, with the implication that all aspects are underpinned by a set of assumptions and corresponding values - often at an unconscious level.

Evaluation methodology has attracted considerable attention from a number of disciplines (particularly psychology, social policy, sociology and education) and has been shown to be a highly flexible form of applied research. Depending on the approach used, evaluation can be either narrow or broad in focus, formulaic or dynamic, imposed or participative, purist or eclectic. Although many papers in library and information studies (LIS) recognise these issues and use evaluation methods to address program, service or system provision, there is a lack of discussion about evaluation itself as an information behaviour. Furthermore, the ideas that dominate LIS academic debate and research are not evident in the research that libraries conduct themselves. For the most part it seems that practitioners conduct 'evaluation', often contracting out the whole or parts to market research companies, while academics conduct 'research'. There appears to be a gulf between the two practices and the ideas which inform them, despite the huge number of evaluations which are continually taking place within the very communities LIS research aims to improve. This is a deeply problematic situation in a discipline which is committed to 'excellence in professional service to our communities' (ALIA, 2002, point 6).

Why evaluate library services?

Improvement

The principles which inform our discipline require us to engage in on-going service improvement in order to achieve the excellence which ALIA advocates and the 'high standards of provision and delivery of library and information services' outlined by the International Federation of Library Associations & Institutions (IFLA 2003a, section: Aims). Evaluation assists decision-making by providing information about service strengths and weaknesses, indicating where successes can be built on and areas where significant improvements could or should be made, highlighting service gaps and suggesting new directions.

Accountability

Like all public and private sector services, libraries are fighting for funds and are therefore called upon to justify their expenditure and even their existence. Service efficiency and quality measurements are provided in order to petition for on-going or increased revenues, and in order to engage in dialogue with stakeholders (Sheppard, 2002).

Playing the game

But there is frequently a difference between what we say evaluation is for, and what we really use it for. Evaluation is an inherently political behaviour, often developed in direct response to the socio-economic world in which services operate, and in which the same 'piece' of information may have multiple roles (Greene, 2000). For example, Pelz (1978) regards social research as having three uses: instrumental, conceptual and symbolic. Instrumental uses affect policy in direct ways - guiding a decision or solving a particular problem, while conceptual use enlightens through new understandings. Symbolic (or tactical) uses are political - justifying action already taken or excusing inaction, 'demonstrating' supposed success or failure, or emphasising/masking significant issues. It must be acknowledged that library evaluation also takes place within this political arena and is subject to instrumental, conceptual and symbolic uses.

What are we evaluating?

Are we evaluating the service's efficiency or its quality? Customer opinion or the social value of the library within its community? Most authors distil these concepts into two classifications: customer satisfaction and service quality. Some papers use the terms interchangeably (Pothas, De Wet & De Wet, 2001), while others make a distinction, defining customer satisfaction as a time-limited, subjective reaction to recent or overall service encounters, and service quality as a 'global judgment', an objective response based on the extent to which the service meets a user's expectations (Hernon, 2001; Duffy & Ketchand, 1998). This paper supports Parasuraman's (1994, p112) revised position that customer satisfaction is an aspect of service quality - that as we try to develop better methods for understanding the value of services and ways to improve them, customer satisfaction should be one of the factors we attend to. Customer satisfaction, then, is broadly considered as the extent to which people think and feel positively about a service; but used alone it is not a good indicator for the 'big picture' of service quality (Duffy & Ketchand, 1998).

Current conceptualisations of service quality are dominated by the gap model which surveys and measures the difference between perceptions of a service and other dimensions variously identified as:

  1. expectations of an ideal service (Hernon, 2001; Covey, 2002);
  2. the importance of service attributes (Rodski Group, date unknown);
  3. idealised expectations and importance (Parasuraman, 1994); and
  4. desired service and minimal standards (Cook, Heath & Thompson, 2000).

The gap model assumes that objective standards for service quality can be revealed by comparing discrepancies between these dimensions and that this, in turn, provides benchmarking measurements that enable libraries to establish priorities, assess their own annual performance and compare it to that of other libraries.
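To make the arithmetic behind the gap model concrete, the following sketch shows how gap scores might be computed for a single survey item rated on the desired/minimum/perceived dimensions listed at point 4 above. It is illustrative only: the response values are invented and the variable names are not taken from any of the instruments cited.

    # Illustrative gap-model scoring for one survey item (invented data, not from any cited instrument).
    # Each respondent rates three dimensions: the minimum acceptable level of service,
    # the desired level, and the level they perceive the library currently provides.
    from statistics import mean

    responses = [  # hypothetical ratings on a 1-9 scale
        {"minimum": 5, "desired": 8, "perceived": 6},
        {"minimum": 4, "desired": 9, "perceived": 5},
        {"minimum": 6, "desired": 8, "perceived": 7},
    ]

    perceived = mean(r["perceived"] for r in responses)
    minimum = mean(r["minimum"] for r in responses)
    desired = mean(r["desired"] for r in responses)

    # The discrepancies the model compares:
    adequacy_gap = perceived - minimum     # positive: service exceeds the minimal standard
    superiority_gap = perceived - desired  # negative: service falls short of the desired level

    print(f"adequacy gap {adequacy_gap:+.2f}, superiority gap {superiority_gap:+.2f}")

Scores of this kind are what make benchmarking and cross-library comparison possible; but, as argued below, they say nothing about why a given dimension matters to a particular community.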

This model tells us more about the service in comparison with what users want and therefore offers greater utility for decision-making than simple satisfaction surveys (Covey, 2002), but it does not provide us with an understanding of the library's social role or the needs of its users. This paper suggests that service quality in libraries relates to a service's value for individuals and communities - the benefits it brings and the ways in which it can enhance people's lives. As Kyrillidou puts it, 'Libraries are social institutions, being part of the social capital available to a community. As such their value needs to be articulated in relation to the value they provide to the user' (2002 p43). Therefore service quality is the extent to which a service meets the information needs of its community: not only in relation to what users say they would like when they tick boxes in a survey, but what the whole potential user community would most benefit from. Quality evaluation, then, also has to consider needs assessment.

Evaluation must engage in dialogue with service objectives, but it also needs to influence those objectives. Many libraries are currently evaluating how well they meet aims which were developed by executives years previously. Mission statements and objectives should articulate community needs and evolve in response to changes in those needs. This cannot be an 'objective' process as some authors claim. Concepts such as 'social capital' and 'social inclusion' which underpin many arguments in this paper are acknowledged to be politically loaded. Services are developed within complex cultural contexts and are required to respond to a multitude of social and economic pressures. Evaluation must take account of these contextual factors if it is to provide the deeper understanding necessary for significant improvements in quality; and it must call attention to them if libraries are to maintain their integrity as advocates for 'universal and equitable access to information' (IFLA, 2003a, section: core values, 2).

So, what could we be doing to evaluate our services in relation to individual and community needs more effectively? This paper returns to basics in an effort to construct a framework for thinking about evaluation. It reviews some key philosophical underpinnings in LIS, provides a broad overview of research methods, and looks at current trends in the focus of library evaluation.

Philosophical issues: the 'paradigm shift'

In 1986 Dervin and Nilan wrote a seminal paper that documented a change in the way researchers were viewing people's information needs and behaviours. The paper explained that traditional methods were 'system-centred' because they asked questions that focused on the information service (the term 'service' is used to include human services and electronic systems); and ignored crucial issues regarding people's information needs, uses and abilities. They explained that system-centred research tends to concentrate on what people prefer, and how much or little they use particular resources; and it categorises them in traditional demographic groups such as their socio-economic status. System-centred perspectives see information as an objective 'thing' that is transmitted, and which people process in a rational and consistent manner. These underlying perspectives are generally not articulated in the research however; rather they are assumed as a shared understanding of the way people and information interact, based on the traditions of 'scientific' knowledge, with the core implication 'that by knowing how users have or might use systems, one knows what their needs are or might be' (Dervin & Nilan, 1986a, p10).

Dervin and Nilan argued there was an alternative paradigm - an alternative way of seeing and thinking about things - which offers greater insights into the complex relationships between people and information. This 'user-centred' perspective regards people as complex and sophisticated information seekers and users, actively engaged in a process of making meaning, and varying their behaviours in different contexts. Furthermore, it sees information as fluid, open to multiple interpretations and meanings dependent on when, where and by whom it is being utilised. User-centred research tends to concentrate on the processes people go through in their attempts to access information and make personal sense of it. It looks at what helps and hinders this process, and classifies people by the common information interests or needs that they have in that particular situation. User-centred approaches actively question our understanding of the relationships between people, information and services, and therefore make this inquiry a focal point of the research.

What does this have to do with library evaluation? Because our beliefs and assumptions shape our values and goals, they shape our service objectives and our day-to-day practice, and they shape the methods we use to evaluate them. Service decisions about what we should do and why we are doing it are founded on our core beliefs, our philosophies about the world. As Morris points out, 'assumptions based on the traditional model have dictated the kinds of services we supply and the kinds of libraries we have created' (1994, p21). It would therefore seem reasonable to assess these assumptions if we are to make educated service decisions.

System-centred questions help us to discover the extent to which people use a service, what they know about it, how they rate it, their preferences, and their 'profile'. There will always be a need to ask these questions and reflect on the answers as part of the evaluation process: thus they continue to play an important role in research. But they are not enough. In order to understand the value of a service we need to know its purpose: how people actually use it, what difference it makes in their lives, and why they behave and feel the way they do in their service interactions. We need to ask user-centred questions and select methods which permit users to explore these questions. User-centred philosophy not only guides our questions, it suggests that users have a significant role to play in the development of research methods and the utilisation of findings. For example, Henry (1996) offers suggestions about how communities can participate in the formulation of survey questions, and he demonstrates how user input provides contextual information about outcomes which leads to more accurate and facilitative decision-making about service improvement. Indeed, Marshall (1984) takes this further, arguing that 'there is no question that the development of citizen participation in libraries must be encouraged, including the establishment of citizen advisory groups' (p277).

Yet library evaluation remains system-centred. Why? In later sections this paper will explore this question and draw attention to the gulf between academia and the workplace, arguing that user-centred perspectives and research are mainstream within academia but still appear to be peripheral within practice communities and their evaluation strategies. Does this gulf in thinking and application between scholars and practitioners matter? Of course it does. There are valuable lessons to be learnt by each party from the other - ideas that influence the direction of theory and practice, that contribute to the body of learning which promotes better understanding and educates new practitioners, ideas that promote relevant research and creative debate and, ultimately, lead to better information services. It is a two-way street: Durrance finds that '...the integration of research findings into professional practice results in improved outcomes' (2003, p549), while Ingwersen (1995) reminds us that LIS theory has developed out of practice, Biddiscombe points out that practitioners 'play an increasingly important part in the academic process' (2001, p161), and Kuhlthau (1993) explains that her academic work is founded on experience as a practitioner.

Qualitative versus quantitative methods

The primary debate in evaluation has not centred on the philosophical paradigms discussed above, but on the tools used to conduct research: the choice between quantitative and qualitative methods.

Quantitative tools gather numerical and statistical data using experiments, measurements, fixed-response questionnaires, test scoring, et cetera. The approach is underpinned by 'scientific' world views of cause and effect, belief in the objectivity of the researcher and the search for truth. Quantitative methods used in library assessment include web server statistics, electronic counters and surveys. These surveys are usually questionnaire-based and, at their best, are grounded in extensive and on-going piloting which uses qualitative methods such as focus groups and individual interviews to establish that the questions are meaningful to the user community. They often employ empirical testing which targets specific groups and topics while also fulfilling the scientific 'trinity of validity, generalisability, and reliability' (Janesick, 2000, p393).

Qualitative methods gather descriptive information using observation, case studies, reflection-in-action, document analysis, open-response questionnaires, and interviews. The approach is underpinned by social/contextual world views and the belief that interpretation and 'truth' are subjective. Qualitative methods used in library assessment include focus groups, open-response questions (usually as a component of fixed-response surveys), in-depth user interviews and reflection-in-action analysis. These open-response techniques help to explain quantitative data, provide contextual meaning, allow people to raise issues and explore complex situations. They also involve dimensions often excluded from quantitative approaches such as humour, external social factors and, particularly, emotion - a crucial dimension for library users both in terms of their information needs and behaviours (Kuhlthau, 1993) and their perceptions of the institution and the people who work in it (Radford, 2001). Davis & Bernstein (1997) found that careful analysis of a few qualitative comments provided more helpful information with which to improve service quality than all the quantitative questions used, while Lincoln argues that qualitative approaches 'illuminate aspects of libraries, library services, and library users' perspectives in ways we have not had access to in previous research' (2002, in Durrance & Fisher, 2003, p317).

At first glance it may seem that qualitative and quantitative methods fall easily into the user-centred and system-centred paradigms outlined earlier. Yet user-centred research can (and often does) employ quantitative tools as a component of its investigation, while qualitative methods can be system-focused - open-ended questions can ask about user preferences just as well as closed questions can. Nevertheless, conventionally each method is associated not only with asking different questions but with the philosophical leanings of those questions. As Riggen says, 'qualitative and quantitative methods, while not inherent to a paradigm, can act as carriers of paradigm attributes and should be selected for that ability' (in Greene & Caracelli, 1997, p92). Thus quantitative methods are more commonly used in traditional scientific inquiry, which is interested in finding facts, patterns and comparative baselines, and are therefore more in sympathy with system-centred research; whilst qualitative methods are more commonly used in social inquiry, which is interested in variable, contextualised human experience, and are therefore more in sympathy with user-centred inquiry.

Both qualitative and quantitative methods have weaknesses as well as strengths. For example, quantitative tools oblige people to fit in with pre-determined statements, limiting their answers to a tick against someone else's wording and thereby preventing exploratory communication and explanation. This attempt to reduce human complexity to numerical data can result in superficial understanding and simplistic outcomes. Qualitative research, however, covers smaller samples which, combined with its in-depth, personalised style, means that generalisations are more difficult. It also relies more on (and is arguably more influenced by) the researcher's skills, and does not easily generate statistical data on which to base service decisions.

As Pawson & Tilley note, quantitative tools clearly define the 'what', while qualitative methods provide the 'why' (1997), and Greene & Caracelli add that qualitative approaches provide depth while quantitative approaches offer width (1997). This paper argues that neither makes sense without the other, and strongly advocates the use of creative, mixed method approaches. Witkin (1994) argues that mixed methods are also required for comparability and validity of findings because there is evidence to suggest that 'different methods produce different lists of needs ... [and] yield non-comparable results' (p24). This call for blended methodology applies equally to the paradigm debate above and to the measurement focuses below. Different approaches 'get at' different issues and both are valid when used overtly to achieve clearly defined aims. Library evaluation has employed quantitative methods to establish usage trends and baselines and to gather the demographic data which is vital for public accountability and funding. But it also must use qualitative methods if it is to gain real insights into the experience of service users and the wider community's needs.

Inputs and outputs versus outcomes

The arguments for more user-centred, mixed method evaluation are far from new. Many practice-based researchers have been working towards methods which not only capture data, but also strive to understand the quality of library services and the value they have for users.

Library evaluation has traditionally focused on efficiency calculations: inputs (resources such as funds, equipment, books, staff) and outputs (products or activities like transactions, circulation, and special programs). These measurements provide important 'snapshot' or longitudinal statistics about the daily life of the library which can be used to highlight issues such as comparable funding, staffing levels and resource changes. However, such system-centred, quantitative methodology is limiting. As Durrance puts it 'public librarians, state agencies, and the federal government have come to rely on output measures for public libraries as indicators of public library effectiveness. While the primary values of these measures are as indicators of efficiency and use, they do not reflect value gained by the user' (2003, p546).

More recently there has been a shift away from an input/output focus towards outcomes evaluation. Covey explains, 'outcomes are measures of the impact or effect that using library collections and services has on ... users. Good outcome measures are tied to specific library objectives and indicate whether these objectives have been achieved' (2002, p89). Therefore outcomes are not concerned with how much users like the service, but how much it has benefited them in their everyday life or helped them with a specific goal. The approach is still quantifiable and attempts to measure dynamics rather than understand them, but it is interested in changes in a person's skills, knowledge, behaviour, attitude, status or life condition (Motylewski, 2002) - such as gaining a job, getting better grades on assignments, improved confidence using software or increased understanding of a language - and it can employ both qualitative and quantitative tools in its investigation.

User-centred research is, by definition, concerned with outcomes, but formal outcomes evaluation is not necessarily user-centred. Outcomes are the focus of inquiry, and are subject to different methods and philosophical approaches. In other words, we can assess outcomes from a system-centred, quantitative approach by calculating test scores of people who participated in a library literacy program; equally, we can take a user-centred, qualitative approach by talking with people about changes in their self-esteem and life experiences since participating in that same literacy program (see Dervin & Clark, 1987, for a practical application of user-centred library outcomes evaluation). Hernon distinguishes between these examples by calling them 'direct outcomes' and 'indirect outcomes' respectively (2002, p61). Nevertheless, overall it can be said that an outcomes focus takes us further in our journey towards user-centred inquiry because it acknowledges that the user is actively engaged in a dynamic, contextualised process; not simply consuming a product.
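As a small, purely hypothetical illustration of the first of these approaches (the scores below are invented, not drawn from any actual program), a 'direct outcomes' calculation might be as simple as:

    # Hypothetical 'direct outcomes' calculation: mean change in test scores
    # before and after a library literacy program (invented figures).
    pre_scores = [42, 55, 38, 61, 47]
    post_scores = [58, 63, 52, 70, 55]   # the same participants, after the program

    changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
    mean_change = sum(changes) / len(changes)
    print(f"Mean score change: {mean_change:+.1f} points")

    # The figure suggests the program 'worked' on average, but reveals nothing about changed
    # self-esteem or life experience - the territory of the indirect, user-centred outcomes above.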

Kuhlthau (1993) believes that librarians tend to approach users with a focus on 'product' (information sources) rather than 'process' (problem-solving). She argues that product-focused mediation assists with access to information sources, but that only process-focused mediation assists with learning from the use of information (p134). It is the focus on product, or sources which is, according to Kuhlthau, one of the chief obstacles to providing better information services. This distinction can also be applied to evaluation: both what we are evaluating and how we are doing it. The marketing emphasis treats libraries as a product, hence we evaluate the relationship between user and product; despite the fact that people actually experience libraries more as a process. Similarly, we use evaluation itself as a product, a tool to produce findings, rather than as an opportunity to engage in a reflective dialogue in which we really learn from our users.

More differences between what we say and what we do

ALIA's library standards publication, a cornerstone of much service planning in Australia, states the main elements of service evaluation are, 'community needs analysis, service objectives, input and output measures, efficiency and ultimate benefits to users' (1990, p88). It argues that objectives and planning should take account of a range of contextual and user needs, that service quality should be assessed in relation to these objectives, and that some qualitative methods should be used. Yet the standards are organised according to inputs and outputs with no further indicators of how needs evaluation might actually be tackled (but it should be recognised that this text is thirteen years old and may reflect attitudes which have moved on significantly since then). Other evaluation manuals describe outcomes evaluation and provide survey templates and analysis guidance (Hernon, 2002) but they remain system-centred with an emphasis on quantitative methods and do not offer any means for user communities or local evaluators to participate in the formulation of evaluation questions. Meanwhile LIS academia is embedded in user-centred research which strongly criticises such approaches.

Morris argues that librarians conduct system-centred evaluation despite the fact that 'we don't talk as if we hold this traditional view of information' (1994, p21). She believes the way librarians think about the relationship between people and information is intuitively more aligned with the alternative paradigm - regarding users as having active, complex information needs and behaviours, and knowing that information is not simplistically transmitted and received but complicated by many socio-cultural, environmental and personal factors. Similarly, Durrance (2003) points out there is a mismatch between library mission statements which tend to be user-centred, concerned with individual or community value-related outcomes ('to satisfy information needs, enhance knowledge, enjoy leisure and stimulate imagination') and the assessment methods they employ which focus on inputs and outputs. It seems there is a gap between the way we conceptualise our services and the methods we use to evaluate them. Thus our reliance on system-centred evaluation, which generates system-centred findings, is out-of-step with service objectives and with our core values and beliefs. This reduces our potential to improve services and enhance the lives of our users, and it also damages us by reducing the librarian's role to that of a shop assistant - a product-orientated customer service operative rather than a creative people-orientated professional.

How do these issues play out in practice? A brief look at some examples of current library evaluation in Australia may provide more context for the argument that evaluation requires a methodology which mixes each of the three areas discussed above: system- and user-centred perspectives, quantitative and qualitative methods, plus input/output and outcomes-focused inquiry. These examples will inevitably draw attention to weaknesses while ignoring strengths. This is regrettable given the enormous amount of time, energy and commitment that staff put into service planning and provision; but this paper will concentrate on areas that could be improved.

Some examples of library evaluation

Academic libraries

In this first example, a university library distributed 'client survey' questionnaires at the library entrance which were also available on their website. The questionnaire, developed by a market research company, asked a total of fifty-four quantitative questions - multiple choice or rated for perceived service importance and performance - followed by two qualitative questions asking for general comments and 'one thing we could improve'. A prize draw incentive was offered.

The questionnaire had a strong system-centred, input/output focus with no identifiable outcomes, and was a good example of Covey's argument that surveys fail to capture an understanding of why things happen because they 'cannot ... establish cause-effect relationships, and the information they gather reveals little if anything about contextual factors affecting the respondents' (2002, p7). The quantitative questions appeared to have insufficient contextual flexibility because of their poor fit - there were no questions about the library's new security measures or the 'Ask Me!' roving assistants pilot program - despite both being significant, high profile, potentially high-impact initiatives. User grounding seemed inadequate because many questions were concerned with issues of which the majority of users may have no experience, or would feel unqualified to answer, such as rapid materials processing, group study facilities, staff initiative, disability access; yet there was no 'n/a' option. Additionally, several questions could be seen as ambiguous (for example, access to electronic databases and journals may depend on where access is attempted - online at a remote location or in the overcrowded and therefore hard-to-access library computer lab). The lack of response options and comment space with which to qualify answers means evaluators have no way of knowing why questions receive particular ratings or why they are left blank (because the respondent does not understand the question, or has no opinion, or doesn't know, or believes 'it depends', or another reason altogether?). The survey's attempt to assess service quality as well as customer satisfaction by employing the gap model still does not help us understand the contextual issues which make up user needs and overall service value - that is, why things are important.

In their evaluation of reference services this library developed a very different, mixed method approach. Staff conducted a comparative survey of other academic libraries and reviewed extensive bodies of literature outlining reference evaluation models. They analysed the criteria and scope of each model before defining their evaluation aims and choosing to blend two focus group approaches which allowed them to incorporate a range of issues already identified as significant by their users, such as digital standards. The focus groups were organised by different user communities, thereby recognising their potentially different needs (Whitmire, 2001), and used 'brainstorming' techniques to establish unique agendas for each and so ensure the group's issues were addressed (Widdows, Hensler & Wyncott, 1991, reflect on the importance of this flexibility). However these focus groups were limited to the evaluation of reference services alone - only one aspect of the broad library service. Quality reference services are often considered to be the cornerstone of a good library, yet without an analysis of what users feel to be most important there is a danger that this attention is mis-focused. Indeed, formal library services are, themselves, only some of the many uses that people may make of the library. As Kyrillidou (2002) argues, it is likely that many users actually want to use library facilities independently - only asking for assistance if absolutely necessary (p45). We require grounded, mixed-method user needs research at a local level to find out what proportions of people use different aspects of the library, their reasons for doing so, and what would assist them further.

The limitations of pure quantitative research are highlighted by another evaluation example which took place in a different library. Having conducted web server statistical analysis, this library was able to state that their website usage was 'active', but whether this was because users were happily occupied within a well organised site or were desperately searching within poorly designed information structures (or both, or neither) they were unable to say. Qualitative, user-centred evaluation is needed to give meaning to raw data and help us design quality service improvements.
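The limits of this kind of raw data are visible in what a log analysis can actually compute. The sketch below is a generic, assumption-laden example (the log file name and the common log format are assumptions, not details of this library's system): it counts successful page requests per path, which is enough to call usage 'active' but nothing more.

    # Generic sketch of web server log analysis (file name and common log format assumed).
    import re
    from collections import Counter

    LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

    page_views = Counter()
    with open("access.log") as log:          # hypothetical log file
        for line in log:
            match = LOG_LINE.match(line)
            if match and match.group("status") == "200":
                page_views[match.group("path")] += 1

    # Counts show which pages are 'active'; they cannot show whether visitors were
    # happily occupied or desperately searching.
    for path, count in page_views.most_common(10):
        print(f"{count:6d}  {path}")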

Another university library explained that they recognised the limitations of their input/output evaluation and are considering an outcomes orientated survey package next year. This package is also based on the 'gap theory of service quality' and requires users to rate their perceptions of current library services together with their minimum and desired standards (Cook, Heath & Thompson, 2000). It combines qualitative and quantitative response methods but, like many other outcomes approaches, it has a strong product-orientated market focus and emphasises cross-library comparative analysis. This assumes state or national standards can determine the needs of each user community, as if they were homogeneous across library types and locations. It does not take account of social and institutional factors which may influence services differently from community to community nor, most importantly, can it take account of services which may have been tailored to meet local user needs, possibly on the basis of previous evaluations and user feedback.

This generic instrument is gaining popularity despite the growing emphasis within LIS research on communities of practice which highlight the very different needs that people have in different contexts. Comparative standards and measurements can offer useful baselines for service improvements such as promoting better access for people with disabilities, arguing for increased funding to match that of neighbouring councils or competing institutions, et cetera, but they only go so far. Evaluation must also understand context and the uses each community makes, and wishes to make, of the service. As Hernon says, 'excellence in public libraries is to be defined locally by each library determining its role in the community through an assessment of community needs' (2002, p57). However, it should be noted that outcomes evaluation has been developed by academic and research libraries which, arguably, have a more homogeneous user community and therefore tighter goals than public libraries which, by their very nature, are generalist service providers.

Public libraries

A recent Australian public library survey consisted of nineteen questions offering a range of closed and open responses, many of which required rating on a scale of 'excellent' to 'failing' and included an 'n/a' option. The survey was only supplied in English even though seventy per cent of the local government population speaks languages other than English at home (2001 Census data). It is not known what proportion of the community would be able to complete a written English language survey, but staff regularly interact with users who struggle with basic verbal English, so the numbers are likely to be significant; particularly as the survey includes examples of jargon ('How do you rate the OPAC?') which might even deter proficient speakers. This survey is a good example of why a system-centred approach is unhelpful. The starting point here is the library rather than the needs and experiences of the people who use it, and this has resulted in an instrument which is unintentionally discriminatory and out-of-step with the library's mission statement 'To provide both adult and children's material in a variety of languages to meet the needs of [our] diverse ethnic population'.

Library surveys carried out in this locality in 1997 and 1998 indicated that more than two-thirds of users spoke languages other than English at home, but that usage of community language materials and survey feedback about them was low. Employees of the library are aware that the community language books are disorganised and messily shelved, often mixing fiction and non-fiction, adults' and children's books. It seems reasonable to assume that community language users would appreciate the same order as those using English resources, but that lack of complaints by users contributes to the lack of attention by staff. Apart from the obvious barrier of language, and the limitations of the methods used to consult them, there are other contextual factors which might contribute to the silence and could shed more light on their information needs. For example, what proportion are new immigrants? And how many of those are from countries where access to information may be a loaded issue? What are the experiences of the community's Muslim population in the current political climate which might affect their willingness to complain? Witkin notes that 'too many needs assessments are really opinion polls that generate wish lists of solutions to scarcely-articulated problems' (1994, p25). If we really do want to meet the needs of our users we need to find out what their needs are; and this demands a degree of user-centred research to supplement traditional evaluation.

Staff in another public library observe that a significant number of children 'hang out' waiting for their parents to collect them later in the evening, suggesting that the library is used for informal after-school care by some families, a community 'safe place'. This is a fascinating phenomenon, explored in some depth in the United Kingdom (Poustie, 2002); however this library knows very little about it, largely because they have not asked about it. In fact, they have not heard the children's point of view on any matters because surveys are only given to those aged fifteen years and over, despite previous evaluation showing that children and young people make up a higher than usual proportion of this library's users. Focus groups are now being planned for children and young people - a qualitative method which allows reflective discussion and open debate if managed well (Widdows et al, 1991). This takes the library closer to user-centred evaluation and towards addressing its service objectives regarding children and young people.

Another library has a website which users often mention unfavourably to staff when advised of online services. However, this library's evaluation survey does not ask about the website and, therefore, as far as formal evaluation goes, there is no problem with it. The survey was piloted five years ago and so is out-of-date with regard to the 'electronic revolution'. A supplementary website question could be added to the current survey without ruining comparative measurements, but user-centred, qualitative inquiry is needed to understand what users want from the site and how it can help them with their information needs.

One of the contemporary roles Australian public libraries play is that of community information centre (Giese, 2003). Many include this mandate in their mission statements and make some provision for authoritative, up-to-date information on health issues, child care, community language assistance, legal information and other social services; often producing or supplying a free community services directory. Yet seldom is this role evaluated. Is it wanted? Are libraries the right place to host it? How much and in what ways is it used? How does it dovetail with other local services? Where are the gaps? Again, we need to ask these fundamental questions if we are to find meaningful answers that provide quality service direction.

The libraries in all the examples above are staffed with LIS-educated librarians who would have engaged with user-centred research and methodological debates as a component of their training. Many have on-going access to LIS journals and work alongside LIS students who are currently engaged with user-centred studies. Yet the gulf between theory and practice continues. This takes us back to the key question: 'Why?'

The gulf between theory and practice

Some possible explanations for this gulf are considered below.

'There is always a gap between theory and practice'

Many disciplines experience the phenomenon of gaining qualifications by emphasising academic discourse, but subsequently drawing on entirely different resources in 'real life'. However, most user-centred research takes place within 'real' contexts: places such as high schools (Limberg, 1999), school and public libraries (Kuhlthau, 1993; Dervin & Clark, 1987) and hospitals (Pettigrew, 1999). It explores everyday life situations like reading newspapers and watching television (Savolainen, 1995) and obtaining community information (Pettigrew, Durrance & Unruh, 2002); and examines life crisis situations such as domestic violence (Harris & Dewdney, 1994), health crises (Baker, 1996, in Julien, 1999) and career decision-making (Julien, 1999). These works offer valuable insights into the experiences and needs of people in many aspects of their lives. They identify crucial service issues such as barriers and aids, and the psychological factors which empower or prevent people from getting what they need. This is real life.

'Academic writing is irrelevant to practice'

Academic research is abstract, self-referential and sometimes impenetrable, creating differences of understanding between those who have strong incentives to wade through it, and those who simply want ideas to help them reflect upon and improve services. Fitzgerald (2003) argues that LIS academics are not interested in accessibility or relevance and, in fact, disassociate themselves from practice as much as possible. He concludes that academics are only writing for other academics, paying homage to one another buoyed by 'career concern and ego gratification incentives' (2003, section: relevance to other academics) and 'subsidized vacation opportunities' (Khazanchi, 2001, in Fitzgerald, 2003, section: background: relevance of research to practice). Despite this negative portrait there are notable LIS academics who are committed to applied research and whose work is evident in libraries and information services across the world; see, for example, Kuhlthau (1993), Julien (1999) and Dervin & Dewdney (1986b). A greater commitment to communication styles which allow (and even encourage?) practitioners to engage with the valuable debates currently limited to the academic elite would be very welcome. This does not require 'dumbing down', only more accountability and effort by authors to render their work meaningful.

'Academic research does not place enough emphasis on practical application'

LIS theory 'draws from practice to inform writing but fails to then develop findings of practical value' (Heeks, 2001, in Fitzgerald, 2003, p18). Academics have a duty to address practice - not only to ground a significant proportion of their work in field research, but to give applied examples and practice recommendations in what is, after all, an applied discipline. Clearly not all aspects of LIS lend themselves to service application, but even these might benefit from a 'why does this matter?' analysis: discussion of how further pursuit of the issue may enhance understanding and how this might, in turn, enable better information services. If academics are truly committed to information service quality then surely this should be reflected in the value and usability of the information they are paid to produce.

Dogma

The arguments made by LIS theorists, using 'persuasive and authoritative rhetoric [which] is hard to discount' (Fitzgerald, 2003, section: relevance to other academics), strongly suggest that they hold the view that their ideas - and their ideas alone - are correct. They effectively promote their theories as truth - even those that embrace paradigms which deny the concept of truth. This may result in part from the 'politics' of academia which make it hard for researchers to collaborate across knowledge domains and to advocate eclecticism, despite the fact that many value collaborative, mixed methods in practice. Dervin (2003) argues that academics suffer from 'extreme information overload' (section: disciplinary tyrannies, para 5) from competing, often contradictory and chaotic research findings. This causes them to become more determined in their disciplinary allegiance and more susceptible to the blinkered 'tyranny of the quest for certainty' (section: disciplinary tyrannies, para 8). Interestingly, Dervin herself is a good example of this. On the one hand she argues for theory-to-practice relevance and eclecticism, noting 'that there are utilities to be derived from both [user- and system-centred] orientations' (Dervin and Nilan, 1986a, p9) and, in her own Sense-Making model, she acknowledges the influence of many theories and the usefulness of both qualitative and quantitative methods (Dervin, 1992). Yet she also criticises 'misuses of Sense-Making' (Dervin, 1999, p729) and implies that it must be employed as an absolute methodology, represented fully using all its dimensions in any research, rather than using the aspects which are most appropriate to each research context.

'LIS journals are biased against practitioners'

The majority of LIS research is published in academic peer-reviewed journals because 'reward structures in academia encourage researchers to publish in scholarly journals which are read by other scholars, [yet] these reward structures breed the type of articles which are eschewed by practitioners' (Durrance, 1991, p282). McNabb argues that journals have the power to determine which ideas are valued, how they must be treated, and who is permitted to talk about them, and 'therefore, can determine whose argument is accepted (scholars, researchers) and whose is considered null and void (practitioners) by perpetuating the theory/practice split' (1999, p27). This position is supported by Fitzgerald (2003) who claims that academic journals are biased against practitioner papers and instead prefer authors who demonstrate sufficient deference to the 'boys club' conventions of academic publishing. The argument that practitioner papers are rejected by scholarly journals due to legitimate quality control is countered by findings which suggest peer-review results in inequalities, biases and an overemphasis on conservative papers (Eisenhart, 2002); also that it fails to provide quality assurance and may, in fact, simply 'pander to egos and give researchers licence to knife each other in the back with impunity' (White, 2003).

'Research and evaluation are completely different'

Some believe that academic research concentrates on the serious understanding of people's information needs and behaviours, while library evaluation simply focuses on measuring service delivery. Further, that since they are engaged in entirely different activities the gulf between them does not matter. However this paper has argued that evaluation is a form of research, the difference being that 'evaluation' describes one focused activity, whereas 'research' is simply the umbrella term. They share a common purpose - to understand phenomena more fully and to use this understanding in some meaningful way.

'Practitioners do not have the time, energy or motivation to attend to on-going research'

Running busy services with scarce resources, funding pressures and increasingly business-orientated directives is a high-pressure activity and does not support reflective practice. Additionally, there is little incentive from many library employers to engage in continued learning or to conduct creative research. These constraints, together with the issues outlined in previous sections, also limit the potential for being published and gaining external recognition. Practitioners will not maintain their links with academic research unless this is encouraged by management structures; and no amount of practitioner enthusiasm for more meaningful evaluation can effect change if higher level managers are unsupportive.

Practice complacency or defensiveness

In 1980, Garrison (in Durrance 1991) argued that both researchers and practitioners were negligent. Researchers do not disseminate their research well, but practitioners choose to ignore it. They use the 'real world' argument to dismiss research and justify an inverse snobbery which protects them from having to deal with challenges they may find too confronting. One senior practitioner, when asked if she had considered using LIS research to inform her evaluation strategy, rolled her eyes and commented that she thought after graduating she would never have to read 'that stuff' again. This raises worrying questions about practice culture and also suggests that LIS courses may be failing in their efforts to persuade students of the reciprocal benefits, indeed, interdependency, of the theory > research > practice relationship.

Professional values or lack thereof

Is there a mismatch between the principles claimed by organisational bodies and the values of the average information worker? ALIA asks that its members 'observe the highest standards of service quality' by 'maintaining and enhancing their professional knowledge and expertise [and] encouraging the professional development of their colleagues...' (2000, statement 6). Perhaps information workers do not believe that academic research will enhance their knowledge and expertise and therefore look to other sources. Perhaps aspects of library culture - the hierarchical structure, the retail emphasis, the constant struggle for sufficient resources - discourage a commitment to life-long learning. Or perhaps the stereotype is winning out and librarianship simply attracts people who dislike creativity, challenge and change.

'Libraries are embedded in tradition'

Libraries are not engaged in reflective practice and simply do what they have always done because, tautologically, they have always done it that way: gathering data 'for historical reasons or because they are easy to gather, rather than because they serve useful, articulated purposes' (Covey, 2002, p2). Given the accountability pressures on most libraries, public and private, this seems too simplistic. It is likely that traditional methods are favoured partly because they are familiar, a known quantity, and partly because evaluation in the workplace provokes anxiety: not only because staff may feel judged, but because identifying areas for service improvement may lead to more work in an already pressurised work environment. Quantitative methods which focus on input/outputs and customer satisfaction are 'safer' because they are unlikely to reveal anything surprising.

'Libraries exist in a context which is embedded in tradition'

Primarily though, traditional methods are probably favoured because the world view of library staff and the people to whom they supply their data is traditional, based on the dominant paradigm which tells us that 'scientific' methods reveal truths, and that results must be measurable and testable if they are to be of any value. Higher level policy makers, budget holders and data analysts do not tend to value research which addresses abstract concepts such as feelings, life enhancement and community value. Even though the way they talk about service quality may sound user-centred and engage with concepts such as social capital, they tend to favour 'hard facts' gathered and presented in a traditional manner with which to justify business plans to their superiors and the public. Thus libraries are encouraged to produce 'symbolic' data which supports their political agenda and which, in turn, is used by higher management to support theirs (but it should be noted that qualitative, user-centred evaluation can be used tactically also; for example, a strong case involving economics, child protection and community safety might be made for better understanding and improving the after-school care role of public libraries already discussed). Against this background we can see that library practitioners are required not only to remain engaged with research developments and to brace themselves for increasingly challenging practice requirements, but are also in the unenviable position of having to persuade policy makers and funders of the limitations of system-centred quantitative approaches and the desirability of alternative evaluation methods - at potentially more cost. A big ask.

Evidently none of these possible explanations answers the question by itself. But when viewed together they suggest general areas for further reflection and change: especially in regard to the contributory roles of the major parties within this dynamic - academics, practitioners, and higher level policy/funding bodies. There is a need for more open communication, greater honesty about what we do and why we do it, demonstrated sensitivity to the role limitations of each party, more creative input about how these limitations might realistically be addressed and, finally, a commitment to the needs of the people whom we are all attempting to serve in our work.

So what should we be doing about library evaluation?

We need to understand, acknowledge and harness the benefits of mixed method, eclectic approaches to library evaluation. System-centred quantitative tools are already well in place so the task becomes one of finding meaningful ways to supplement them with user-centred, qualitative approaches which take account of user needs within a social context.

Two examples of mixed method evaluation may offer ideas for practical application.

Pothas et al (2001) developed a qualitative, user-centred gap analysis survey to assess customer satisfaction with their bank. They argue this method allows the issues that really matter to customers to emerge rather than obliging them to answer a set of pre-defined questions judged relevant by the investigator. The approach incorporates 'surfacing issues and monitoring impact from the viewpoint of the customer' (p93) and is based on two simple questions:

  • What comes to your mind when you think about your bank?
  • What do you envisage the ideal bank to be like?

The answers to these grounding questions are aggregated to form issue clusters that, in turn, inform the development of further qualitative questions. These questions directly target the specific issues customers raise, and are periodically re-grounded to ensure the evaluation is firmly based in what really matters to service users. This also ensures that evaluation keeps up with environmental changes and service improvements based on previous surveys. Jackson & Trochim (2002) outline methods for analysing and building on the results of such open ended surveys using concept maps - a tool often employed in focus group research.
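The aggregation step lends itself to simple computational support. The sketch below is not the procedure used by Pothas et al or by Jackson & Trochim; it is a rough, keyword-based illustration (the example answers and the choice of three clusters are invented) of how free-text responses might be grouped into issue clusters that then seed more specific follow-up questions.

    # Rough illustration of grouping open-ended survey answers into issue clusters
    # (invented responses; simple keyword-based clustering, not formal concept mapping).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    answers = [
        "Opening hours are too short on weekends",
        "I wish the library opened earlier on Saturdays",
        "Staff are friendly and always willing to help",
        "The librarians go out of their way to assist me",
        "The website catalogue is confusing to search",
        "It is hard to find ebooks through the online catalogue",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

    # Group answers by cluster so that recurring issues (hours, staff, website) surface
    # and can inform the next, more targeted round of questions.
    clusters = {}
    for answer, label in zip(answers, labels):
        clusters.setdefault(label, []).append(answer)
    for label, grouped in sorted(clusters.items()):
        print(f"Issue cluster {label}:")
        for answer in grouped:
            print("  -", answer)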

Norlin's (2000) experience of previous mixed methodology evaluation in an academic library suggested that users had three major needs of the reference staff: 'approachability', 'ability to answer questions correctly', and skills in offering 'ideas on how to get started' (p547). These issues appear to cross paradigms: the first addressing how people feel (a subjective, user-centred perspective), the second their rational information needs (in which information is considered to be an objective product), and the third addressing their context-specific needs as active information seekers engaged in a dynamic process of making meaning. These needs were explored eclectically by survey, focus group and observation methods which resulted in several key changes to the service, including roving librarians and better promotion of technology classes. Norlin also found that different student groups had different needs, and that the rating they gave to each librarian (a quantitative measure) related to many subtleties which were only noticeable via observation (one of the qualitative strategies used). Thus an enriched picture was developed which more accurately reflected users' experience of the library and therefore offered better ideas on how its quality might be improved.

Conclusion

The way that we ask questions shapes the answers we get. All forms of research make assumptions about the world they are investigating, and all have strengths and weaknesses. Mixed method evaluation which combines user-centred with system-centred paradigms and qualitative with quantitative methods offers complementarity. It allows evaluators to design creative interventions which fit with their service aims, and which not only help us to ask the questions we find most pressing but also enable respondents to explore their own. A strategy that connects with the needs, experiences and perspectives of service users is one that leads to informed professionals and more effective decision-making about quality improvements. And lastly, as library and information professionals attempting to supplement our evaluations with qualitative, user-centred approaches, surely we cannot afford to neglect the wealth of research produced by our colleagues in library and information studies.

References

ACT Customer Services and Information (1997), Australian public libraries: 1996-97 selected national indicators [Online]. http://www.info.act.gov.au/functions/librarysurv9697.pdf [Accessed 26 October 2003].

Association of Research Libraries (ARL) (2003), Libqual+™: Charting library service quality [Online]. http://www.libqual.org [Accessed 15 September 2003].

Australian Library and Information Association (1990), Towards a quality service: goals, objectives and standards for public libraries in Australia, Panther, Canberra, ACT.

Australian Library and Information Association (2000), Statement on Professional Conduct [Online].

Covey, DT (2002), Usage and usability assessment: library practices and concerns, Digital Library Federation, Washington, DC.

Davis, D & A Bernstein (1997), 'From survey to service: Using patron input to improve customer satisfaction', Technical Services Quarterly, vol 14 nº3, pp47-61.

Dervin, B (1989), 'Users as research inventions: how research categories perpetuate inequities', Journal of Communication, vol 39 nº3, pp216-232.

Dervin, B (1992), 'From the mind's eye of the user: the sense-making qualitative-quantitative methodology', in Qualitative Research in Information Management, eds JD Glazier & RR Powell, Libraries Unlimited, Englewood, CO, pp61-84.

Dervin, B (1999), 'On studying information seeking methodologically: the implications of connecting metatheory to Method', Information Processing and Management, vol 35, pp727-750.

Dervin, B (2003), 'Human studies and user studies: a call for methodological inter-disciplinarity', Information Research, vol 9 nº1, http://InformationR.net/ir/9-1/paper166.htm

Dervin, B & K Clark (1987), ASQ: asking significant questions, alternative tools for information needs and accountability assessments by libraries, Peninsula Library System, Belmont, CA.

Dervin, B & P Dewdney (1986b), 'Neutral questioning: a new approach to the reference interview', RQ, vol 25 nº4, pp506-513.

Dervin, B & M Nilan (1986a), 'Information needs and uses', Annual Review of Information Science and Technology, vol 21.

Duffy, A & A Ketchand (1998), 'Examining the role of service quality in overall service satisfaction', Journal of Managerial Issues, vol 10 nº2, pp240-256.

Durrance, JC (1991), 'Research needs in public librarianship', in Library and Information Science Research, eds C McClure & P Hernon, Ablex, Norwood, NJ.

Durrance, JC & KE Fisher (2002), How libraries and librarians help: context-based, outcomes evaluation toolkit [Online]. http://www.si.umich.edu/libhelp/toolkit/index.htm [Accessed 17 September 2003].

Durrance, JC & KE Fisher (2003), 'Determining how libraries and librarians help', Library Trends, vol 51 nº4, pp541-570.

Eisenhart, M (2002), 'The paradox of peer review: admitting too much or allowing too little?' Research in Science Education, vol 32 nº2, pp241-255.

Fitzgerald, B (2003), 'Informing each other: bridging the gap between researcher and practitioners', Informing Science, vol 6, pp13-19. http://esc01.midphase.com/~inform/Articles/Vol6/v6p013-019.pdf

Greene, J (2000), 'Understanding social programs through evaluation', in Handbook of Qualitative Research, eds NK Denzin & YS Lincoln, Sage Publications, Inc., Thousand Oaks, CA, pp981-999.

Greene, JC & VJ Caracelli (1997), 'Advances in mixed-method evaluation: the challenges and benefits of integrating diverse paradigms', New Directions for Evaluation, vol 74, Jossey-Bass Publishers, San Francisco.

Harris, RM & P Dewdney (1994), Barriers to information: how formal help systems fail battered women, Greenwood Press, Westport, Connecticut.

Henry, G (1996), 'Does the public have a role in evaluation? Surveys and democratic discourse', in Advances in survey research, eds MT Braverman & JK Slater, Jossey-Bass Publishers, San Francisco, pp3-15.

Henry, GT, G Julnes & MM Mark (1998), 'Realist evaluation: an emerging theory in support of practice', New Directions for Evaluation, vol 78, Jossey-Bass Publishers, San Francisco.

Hernon, P & RE Dugan (2002), An action plan for outcomes assessment in your library, American Library Association, Chicago.

Hernon, P & JR Whitman (2001), Delivering satisfaction and service quality: a customer-based approach for libraries, American Library Association, Chicago.

Ingwersen, P (1995), 'Information and information science', in Encyclopedia of Library and Information Science, ed K Allen, vol 56, Dekker, New York, pp137-174.

International Federation of Library Associations and Institutions (IFLA) (2003a), More about IFLA [Online]. http://www.ifla.org/III/intro00.htm [Accessed 24 October 2003].

International Federation of Library Associations and Institutions (IFLA) (2003b), Information for all: the key role of libraries in the information society (Report Prepared for the World Summit on the Information Society, November 2003) [Online]. http://www.unige.ch/biblio/ses/IFLA/rol_lib_030526.pdf [Accessed 20 September 2003].

Jackson, K & W Trochim (2002), 'Concept mapping as an alternative approach for the analysis of open-ended survey responses', Organizational Research Methods, vol 5 nº4, pp307-336.

Janesick, VJ (2000), 'The choreography of qualitative research design', in Handbook of Qualitative Research, eds NK Denzin & YS Lincoln, Sage Publications, Inc., Thousand Oaks, CA, pp379-399.

Julien, H (1999), 'Barriers to adolescents' information seeking for career decision making', Journal of the American Society for Information Science, vol 50 nº1, pp38-48.

Kuhlthau, CC (1993), Seeking meaning: a process approach to library and information services, Ablex Publishing, Westport, Connecticut.

Kyrillidou, M (2002), 'From input and output measures to quality and outcome measures, or, from the user in the life of the library to the library in the life of the user', The Journal of Academic Librarianship, vol 28 nº1, pp42-46.

Latu, T & A Everett (2000), Review of satisfaction research and measurement approaches, Department of Conservation, Wellington, NZ.

Limberg, L (1999), 'Experiencing information seeking and learning: a study of the interaction between the two phenomena', Information Research, vol 5 nº1, http://informationr.net/ir/5-1/paper68.html

Marshall, J (1984), 'Beyond professionalism: the library in the community', in Citizen Participation in library decision-making: The Toronto Experience, ed J Marshall, Dalhousie University/Scarecrow Press, Metuchen, NJ.

McNabb, R (1999), 'Making all the right moves: Foucault, journals and the authorization of discourse', Journal of Scholarly Publishing, vol 31, pp20-41.

Morris, RCT (1994), 'Toward a user-centered information service', Journal of the American Society for Information Science, vol 45 nº1, pp20-30.

Motylewski, K (2002), Outcomes: libraries change lives - oh yeah? Prove it [Online]. http://www.imls.gov/grants/current/PLA-02-2OBE.pps [Accessed 20 September 2003].

Norlin, E (2000), 'Reference evaluation: a three-step approach - surveys, unobtrusive observations, and focus groups', College and Research Libraries, vol 61 nº6, pp546-553.

Parasuraman, A, V Zeithaml & L Berry (1994), 'Reassessment of expectations as a comparative standard in measuring service quality: implications for further research', Journal of Marketing, vol 58 (January), pp111-124.

Pawson, R & N Tilley (1997), Realistic evaluation, Sage Publications, London.

Pelz, DC (1978), 'Some expanded perspectives on use of social science in public policy', in Major social issues: a multidisciplinary view, eds JM Yinger & ST Cutler, The Free Press, New York.

Pettigrew, K (1999), 'Waiting for chiropody: contextual results from an ethnographic study of the information behaviour among attendees at community clinics', Information Processing and Management, vol 35 nº6, pp801-817.

Pettigrew, KE, JC Durrance & KT Unruh (2002), 'Facilitating community information seeking using the internet: findings from three public library-community network systems', Journal of the American Society for Information Science and Technology, vol 53 nº11, pp894-903.

Pothas, A, A De Wet & J De Wet (2001), 'Customer satisfaction: keeping tabs on the issues that matter', Total Quality Management, vol 12 nº1, pp83-94.

Poustie, K (2002), Whither Australian public libraries: ALIA Conference 2000 Proceedings [Online]. http://conferences.alia.org.au/alia2000/proceedings/kay.poustie.html [Accessed 26 October 2003].

Radford, GP & ML Radford (2001) 'Libraries, librarians, and the discourse of fear', Library Quarterly, vol 71 nº3, pp299-330.

Rodski Group (date unknown), Two core questions - importance and performance [Online]. http://www.rodski.com.au/research/company/methods.cfm [Accessed 6 November 2003].

Rudd, P (date unknown), Documenting the difference: demonstrating the value of libraries through outcome measurement, [Online]. http://www.imls.gov/pubs/pdf/pubobe.pdf [Accessed 30 September 2003].

Savolainen, R (1995), 'Everyday life information seeking: approaching information seeking in the context of 'Way of Life'', Library and Information Science Research, vol 17 nº3, pp259-294.

Sheppard, B (2002), Showing the difference we make: outcome evaluation in libraries and museums [Online]. http://www.imls.gov/grants/current/crnt_obe.htm [Accessed 19 September 2003].

White, C (2003), 'Little evidence for effectiveness of scientific peer review', British Medical Journal (BMJ), vol 326 (7383), p241.

Whitmire, E (2001), 'The relationship between undergraduates' background characteristics and college experiences and their academic library use', College and Research Libraries, vol 26 nº4, pp233-247.

Widdows, R, TA Hensler & MH Wyncott (1991), 'The focus group interview: a method for assessing users' evaluation of library service', College and Research Libraries, vol 52 nº4, pp352-359.

Witkin, BR (1994), 'Needs Assessment since 1981: the state of the practice', Evaluation Practice, vol 15 nº1, pp17-27.


Biographical information

Abby Haynes is a community development officer with the Commonwealth Government. She recently completed a post-graduate Diploma in Information Management at University of Technology, Sydney, during which time she also worked at a public library. abby.h@bigpond.net.au


About this article she says: 'You asked how I came to write the article... It was for my Information Management course at University of Technology, Sydney. We had to write a paper as if for a journal. I was doing research on another topic and visiting several academic libraries at the time, and was given customer satisfaction questionnaires to fill in. I found them clumsy and unable to capture the things I wanted to say. I was struck by how little relationship they had to the research methods we were studying on the course. During this period I was also working in a public library in which we gave out English language questionnaires to NESP majority populations. It seemed there was no real attempt to consult with most of our main user groups, yet many decisions about resource allocation were made on the basis of feedback from this survey. I could not understand why library professionals did not question these instruments more and I decided to try and unpick the puzzle. I wrote the article with the broad ALJ readership in mind, hoping to raise these issues with as many Australian practitioners as possible.'

