Commentary

Design, Use and Evaluation of E-Learning Platforms: Experiences and Perspectives of a Practitioner from the Developing World Studying in the Developed World

by
Nesba Yaa Anima Adzobu
Main Library, University of Cape Coast, University Post Office PMB, Cape Coast, Ghana
Submission received: 15 April 2014 / Revised: 11 July 2014 / Accepted: 14 August 2014 / Published: 2 September 2014

Abstract

Electronic learning platforms are evolving, and their evaluation is becoming more complex and challenging with time. Yet the evaluation of electronic learning services is intrinsically linked to improving the performance of documentation services. In this paper, I describe my perspectives on the design, use and evaluation of an electronic learning platform through the lens of a practitioner from a third world country. I further delineate the challenges and constraints I encountered as a student learning about e-learning platforms and using e-learning platform services at an institution of higher learning in Sweden. In particular, I evaluate the Ping Pong system at the University of Boras, Sweden, and E-prints in Library and Information Science (E-LIS), one of the services from the Bulletin Board for Libraries (BUBL) Link information gateway. It is anticipated that this experiential evaluation will provide designers of e-learning platforms with insights and strategies for refining e-learning platforms to facilitate teaching activities and promote students’ learning efficiency and satisfaction.

1. Introduction

Online learning platforms have come to stay; it is anticipated that they will co-evolve with traditional learning platforms in the future. Increasingly, e-learning platforms are taking center stage in learning and research in institutions of higher learning in developing countries. In this paper, I examine the concepts of design, use and evaluation of e-learning platforms as explicated in the extant literature. I also describe my personal experiences and perspectives on the design, use and evaluation of e-learning platforms, with regard to the untapped strengths of e-learning platforms in Ghana. In addition, I enumerate the challenges and constraints I encountered as a practitioner from the third world (Ghana) learning about e-learning platforms and using e-learning platform services at an institution of higher learning in a first world country (Sweden).
The design of e-learning platforms is a complex undertaking. The complexity is reflected in the number of stakeholders involved in the various stages and processes of the design, which calls for collaboration and interactivity at different levels of the design process. While collaboration largely involves inter-stakeholder communication, interactivity may require communication between humans and the system, a domain known as Human–Computer Interaction (HCI) [1]. This distinction is debatable because communication runs through both collaboration and interactivity. Interaction design is the design of products or systems that help people perform their daily tasks or work and meet usability goals and user experience goals [2]. “Usability goals” refer to interactive products being effective, efficient, memorable, learnable and safe, with few errors, from the user’s perspective [2]. “User experience goals” refer to feelings that cover all the senses and depend on the user’s prior experiences and values; they concern the overall assessment of a product and whether the user would want to use it again. It can be argued that usability goals and user experience goals together determine whether the design of a particular e-learning platform can be considered satisfactory or unsatisfactory.
The goal of this paper is to assess the strengths and weaknesses of the different kinds of functions offered by an e-learning platform in a developed country from the perspective of a user from a third world country. For this experiential evaluation, I selected Ping Pong, the main e-learning platform used by the University of Boras in Sweden, as my research target. Few evaluations of e-learning platforms in the African context exist in the extant literature. However, given that e-learning platforms are numerous and increasingly play prominent roles in altering existing ways of teaching and learning in higher educational institutions in Africa, it is imperative to evaluate them in order to ascertain which of them best suits specific academic contexts. It is expected that this experiential evaluation will provide designers of e-learning platforms with insights and strategies for refining e-learning platforms to facilitate teaching activities and promote students’ learning efficiency and satisfaction.

2. Design of E-Learning Platforms

Let me now relate the various usability goals to my experience of the Ping Pong system (an e-learning platform for distance learners) at the University of Boras, Sweden. First, I received an e-mail containing a link to the Ping Pong site with instructions to log on to it for all my course materials, including lecture notes, film lectures, downloads of course materials, assignments, submission of study tasks and study questions, and communication with teachers, course mates and the systems coordinator. Ping Pong is a learning platform; its primary purpose was therefore to allow me to access course information and to interact with course mates and teachers in order to enhance teaching and learning. In Ping Pong, all course materials and literature for the various units were provided on schedule. The deadline for submitting each task was provided, as were links to course video lectures and indicative readings. All the information I needed to accomplish my tasks was provided and I had access to it, which suggests the system was effective. Effectiveness is one example of a usability goal. According to Sharp et al. [2], “Effectiveness refers to how good [sic] a system is at accomplishing its intended purpose”. However, effectiveness can be much more complex than this. Hauck and Weisband [3] mention that finding ways to wade through the vast amount of information in the large data stores of e-learning platforms is critical to using the system effectively. Toms [4] asserts that just because a system delivers what has been requested of it does not necessarily mean that the results meet the user’s needs and satisfy the requirements of the task. Throughout the course, every task demanded an extensive and painstaking search before I obtained results that met its requirements. Therefore, I think my experience with the Ping Pong system confirms the findings of Toms [4].
I encountered difficulties when using Ping Pong. The challenges were partly due to my limited previous knowledge and use of the system (I had not made any extensive use of such learning platforms before, so Ping Pong was totally new to me) and partly due to lapses in the architecture of the Ping Pong system itself. Nielsen [5] identifies individual characteristics and differences and the user’s task as two critical elements of usability. According to Hauck and Weisband [3], in many cases users who are experts with the technology may be the ones who decide whether a new application or user interface is effective. In my case, the Ping Pong page sometimes either failed completely to load or took a relatively long time to load. I do not need to be an expert in technology to know that this situation is bad. When this happened, my first reaction was to check my time code account to see how much money I was losing as a result of the delay. At other times, clicking on a menu led to a page where only scripts (a series of numbers and letters) appeared, indicating that the page had failed to load properly. Over the entire duration of the course, however, my overall impression of Ping Pong was that it is generally effective.

In Ping Pong, the course video lectures catered for both high- and low-bandwidth users. I fall into the second category, since I live in Sub-Saharan Africa, where information technology is still nascent. From the beginning of the first course, I was not able to access any of the course video lectures because the internet speed in my office was extremely slow. Of course, this problem was not caused by the Ping Pong system. However, the Ping Pong system did not allow me to download the video lectures: the message “cannot find link to cache” popped up whenever I attempted to do so, preventing me from accessing this necessary information. This situation reflects real difficulties in the use of e-learning services, and on this basis alone Ping Pong may not be regarded as effective. The course video lectures should have been designed in a manner that allowed students living in Sub-Saharan Africa, whose internet speed can be extremely slow, to download them for offline viewing. My perspective on this matter is that I may well be alone in my inability to view the video lectures. Access to relevant course information is critical to understanding a course better; therefore, my inability to access this kind of information limited, in one way or another, my capacity to deliver high-quality assignments. The critical questions I asked myself were: are the video lectures a case of bad design? Would I have encountered the same problem if the lectures had been delivered in audio or PDF format instead? Would all the other students have appreciated audio or PDF lectures instead of video lectures, given the level of technological advancement today and the advantages of video over audio? Is it worth changing the lecture format for the sake of only one student from the global South? What guarantee is there that an audio version of the lectures would even work for me? These questions are pertinent and useful for e-learning platform designers because they can guide the design process and thus lead to better decision outcomes.
Apart from effectiveness, efficiency is another element of usability goals. According to Sharp et al. [2], efficiency refers to the support the system or product offers its users in carrying out their tasks. Different people prefer or need different information architectures, and different purposes require specific architectures. Given that users of e-learning platforms come from diverse academic backgrounds and social persuasions, e-learning platforms need to be structured to allow all kinds of users to obtain access to information that might meet their information needs. New-generation students demand faster service, as most campuses today are wired, allowing students to access e-learning resources from their laptops in their dormitories or in the classroom [6]. With regard to the Ping Pong system, I was provided with a user guide to assist me in exploring the system. It helped me to understand the different menus displayed on the start page, in relation to where and how to navigate to the course materials, where to view new messages, and how to set the right language for reading. The user guide also assisted me in understanding how to provide feedback when necessary, and how to communicate with the unit teacher by mail to seek clarification on questions and issues I did not understand. On the basis of these features, Ping Pong could be regarded as efficient. The link to Ping Pong was itself very easy to remember, short and precise (http://pingpong.hb.se). My user name and password were also simple and short. These factors allowed me to log in to the system easily; there was no need to write my user name and password in a diary and carry it everywhere. The log out button is placed at the far right-hand corner of the system and is duly labeled, and recently visited events are displayed on the start page, which served both as a reminder of the events I last visited and as a shortcut whenever I wanted to revisit them. After signing in with my user name and password, I first searched for the log out button; I had no problem tracing it, as it is situated at the same extreme right as in Yahoo mail. While searching for it, I tapped this previous knowledge of where the log out button is found in Yahoo mail, which helped me locate it easily. All the items on the Ping Pong start page were clearly labeled. Hence, even when I did not use the system for a week, I still recalled my previous actions whenever I returned to it. This enabled me to use Ping Pong regularly without difficulty. However, sending mail to multiple recipients in Ping Pong was a task I failed to accomplish; I always had to copy the recipients’ e-mail addresses from elsewhere and paste them into the recipient address column before sending. This is a cumbersome process in terms of the time and energy spent sending mail in Ping Pong, and the help function in the system was of no help in this matter. On this basis, the efficiency of Ping Pong could be questioned. It has been argued that certain features of e-learning platforms (the more technical aspects) may discourage individuals from adopting an otherwise useful system because it proves too difficult to use [3]. On balance, however, I think Ping Pong is generally efficient. It is important to point out here that my evaluation of Ping Pong as a system is subjective: it is based on the issues that I consider most important when it comes to efficiency, and another student may perceive these issues differently.
Learnability is another important issue in e-learning platform design. Learnability refers to the ease with which a new user can learn to use a system within a short time frame [2]. On the start page, menus such as Communication, Personal, Events, Tools and Calendar collectively made the system easy to learn. On the whole, I found the Ping Pong system easy to learn within a short period of time. Ping Pong was also safe to use in that, although my user ID and password were created beforehand and given to me, I had exclusive access to my page and especially to my documents. Teachers had access to certain parts of the system for downloading submitted study tasks and study questions. My colleagues were likewise given different user IDs and passwords; the password and user identity were provided as a privacy right for each student. Another aspect of safety in Ping Pong is that the buttons are few and straightforward. There was no need to click many buttons in search of where to log out, or where to go to the previous page or the start page.
Ping Pong provided the right kind of functionality for me to make use of the system. Under the Personal button, the system provided space to enter my personal details, upload my photograph, and set my preferences (choosing how I wanted Ping Pong to appear). Other functionality included language settings and shortcut routes. There was no time limit, and I could change my personal settings in the system whenever necessary. Ping Pong supports creativity: I added my personal details to the system, and I was able to create a new calendar and enter new events on it as and when necessary. Ping Pong therefore shows a high degree of utility. Using Ping Pong was motivating, satisfying and helpful because I received lecture notes on time and had access to all the information I needed to facilitate the learning process. Ping Pong was not particularly entertaining, as there are no games in the system; this is not a limitation, given that the primary purpose of Ping Pong is to educate. Nevertheless, using Ping Pong was fun because I learnt new things on every visit and added to my previous knowledge every time I used the system.
As an inexperienced and new user of Ping Pong, some of the instructions, especially those on how to submit my assignments, were not initially clear to me. This led me to submit my first two assignments wrongly, by placing them in my personal documents folder instead of uploading them just below the study task. At the very beginning, I did not have enough time to go through the tour guide; I used the start page and clicked every item that displayed a button or symbol to view its content. My initial aim was simply to get an overview of the system, gain access to my course material, read any available messages, and be able to send a Ping Pong Instant Message (PIM). With time, I no longer clicked all the icons on the start page as I had done initially. I visited the Ping Pong system almost every day and subsequently got to know where to locate my personal documents, how to save my documents, how to send a PIM and, eventually, how to upload my study tasks. I navigated to all the content of the system more easily and rapidly than when I started. This means that my user experience of the Ping Pong system improved immensely with time and frequency of use. Ping Pong is therefore efficient, effective, learnable, memorable and safe, and exhibits a high degree of utility. Notwithstanding this, I think the system is far from perfect. From the abovementioned usability goals, and from my subjective evaluation, it appears that Ping Pong is generally a system that performs its intended purpose and hence achieves its specific goals.

3. Use of E-Learning Platforms

According to Tammaro [7] and Monopoli et al. [8], users are satisfied when the speed of access to digital resources is high, access to materials is easy, only simple steps are required to find information, and the materials are within reach. When users are offered access to a database of internet resources that they can search by keyword or browse by subject area, they can do so in the knowledge that they are looking at a quality-controlled collection of resources [9]. When accessing information, users behave differently based on the environment (physical or virtual) within which the search is carried out [10,11,12]. A number of theories have been proposed to explain the different behaviors that information users exhibit in their quest to access information [13,14,15]. In this section, I recount how I searched for information during my study- and work-related tasks and relate theoretical explanations to my access of information. I observed that four behavioral patterns emerged when I was performing study- and work-related tasks: interaction with metadata and real information, cherry picking, browsing, and information encountering. These are discussed in the order in which they are listed.
According to Pharo [11], searchers and users of e-learning platforms mainly interact with two kinds of information: bibliographic information (metadata) and the contents of materials (pure or real information). This statement is true of my search for information in both study- and work-related tasks. In study-related tasks, I interacted with metadata such as journal title, volume, issue number and date, among others. In these tasks, this type of metadata serves as a kind of bridge to the information contained in each article, because the title either kindles or dampens my interest. When the title aroused my interest, I proceeded to read the abstract, after which I downloaded the full article (real information). It is important to note that gathering metadata was sometimes not my fundamental goal when performing study-related tasks; on such occasions, metadata only served as a mechanism to reach the contents of each article, and my goal was the information contained in the article itself. In performing my study-related tasks, then, metadata may be regarded as a means to an end; it is purely instrumental in this case. In carrying out work-related tasks, however, metadata is usually the goal.
One of my foremost work-related duties at the University of Cape Coast School of Business library is cataloguing. The Library of Congress Online Catalogue is the database that I mainly use in performing work-related tasks. The basic search interface of this database provides search by the following metadata: title, author, subject, International Standard Book Number (ISBN) and Library of Congress Control Number (LCCN), among others. The specific metadata that I am interested in when carrying out work-related tasks is the call mark or class number; the call number is therefore my goal, while title, author, subject, ISBN and LCCN serve as the means to reach it. Hence, in work-related tasks, one particular piece of metadata serves as the goal while other metadata serve as the means to reach it. The steps involved in reaching my goal are as follows: typing the author’s full name into the Library of Congress Online Catalogue, then selecting, from a host of author names, the one that corresponds to the specific book being catalogued. Next, I carry out further searches by clicking the title of the book for bibliographic details. I then copy the call number or class mark, except the author cutter number, into the book being catalogued. If the author name is not found in the database, I shift from the basic search to the advanced search mode. If the advanced search mode also fails to produce the call number or class mark, the book is put aside and I begin a new search with a different book. This behavior in the performance of work-related tasks, particularly in relation to author searching, reflects the classic model of information retrieval as described in Bates [16].
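To make the workflow concrete, here is a minimal, self-contained Python sketch of the basic-to-advanced search fallback just described. The two search functions are hypothetical stand-ins for the Library of Congress Online Catalogue interface, not a real API; they return canned, made-up records so the fallback logic can be run as-is.

```python
# Hypothetical stand-ins for the Library of Congress Online Catalogue;
# the record returned here is made up for illustration.
def basic_search(author):
    return [{"author": author,
             "title": "Management Information Systems",
             "call_number": "HD30.213 .L38 2006"}]  # made-up record

def advanced_search(author, title):
    return []  # the narrower multi-field search; empty in this sketch

def find_call_number(author, title):
    """Try the basic search first, then the advanced search; return the
    call number (class mark), or None, in which case the book is set
    aside and a new search begins with a different book."""
    for records in (basic_search(author), advanced_search(author, title)):
        for record in records:
            # Keep only the record whose title matches the book in hand.
            if record["title"] == title and record.get("call_number"):
                return record["call_number"]
    return None

print(find_call_number("Laudon, Kenneth C.", "Management Information Systems"))
```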
Cherry picking is also a behavior that characterizes some stages of my study- and work-related tasks. In my work-related tasks, I search for information on behalf of students. One particular example involved searching for information on the differences between a cell phone and a personal computer for a Management Information Systems course. In this case, I used the Google search engine to gather general information from different sources without sorting or filtering it. All the information I gathered on the subject was transferred directly to a storage device, from which the students had to sort it out to meet their information needs. During the search process, I realized that the search started on a broad note and moved through a series of sources from which I gathered bits and pieces of information, with my queries changing continually along the way. This kind of development in the search process reflects cherry picking [13,16]. Cherry picking is also reflected in my study-related search tasks; in this case, however, I sort and filter the gathered information personally. According to Bates [13], the system does not deliver a complete, single and final retrieved set, which makes it imperative to sort and filter the bits of information to meet my information need. Usually, I search for information outside the literature list provided in the course materials, and during such searches cherry picking is also evident, particularly in the search for journal articles.
Another information searching behavior that appears in my study- and work-related tasks is browsing. As Bates [13] notes, “browsing involves a series of glimpses, some glimpses leading to further, closer examination of things glimpsed and some not”. This is a further elaboration of the model of browsing proposed by Rice et al. [17], which essentially considers it a scanning activity or process. Interestingly, traces of both conceptions of browsing became evident while I was performing my study- and work-related tasks. I realized that I predominantly browsed in the manner proposed by Rice et al. [17] at the beginning of the search process, while the Bates [13] model of browsing became my predominant behavior as the search progressed. I also realized that the duration and speed of my browsing hinged on whether the information in the literature was relevant or not: relevant information engaged my attention, while I scanned through irrelevant information rapidly. Cothey [18] mentions that browsing strategies are iterative and are contingent on recognizing relevant information.
Information encountering is yet another behavior that was reflected in my work- and study-related tasks. According to Erdelez [10], this phenomenon occurs with “the unexpected discovery of useful and interesting information”. Erdelez [10] argues that information encountering is much more than merely ‘bumping into information’; even before the phenomenon occurs, the information searcher is positioned in one way or another to receive the new information. In my case, I was consciously looking for information in relation to my study- and work-related tasks. While the search process was ongoing, I saw information that might be interesting to my co-workers, friends and students. This suggests that I was in a sort of ready mode to receive information. It is significant to note that, prior to my search, these individuals had already given me an indication of their information needs. In such situations, I proceeded to download the information for them.
Users will always be confronted by complexities and challenges when using e-learning platforms. The level of complexity and challenge will be even higher for students like me who come from the third world, where limited knowledge about, and lack of access to, new technology prevails. Speaking from a third world perspective, I think the use of e-learning platforms in our part of the world is negligible, or at best unsatisfactory. I am not sure whether most library users at the University of Cape Coast (both staff and students) are even aware of the full range of prospects of e-learning platforms. On the awareness–use continuum, I can speculatively project that the level of awareness and extent of use of e-learning platforms on campus among staff and students is low. In fact, until recently, the library had no staff with a postgraduate degree specifically covering e-learning platforms. With the scant allocation of resources to fund research on and the establishment of e-learning platforms, limited technical expertise in the field, and overconcentration on physical libraries in the university, I can clearly perceive a widening of the digital divide between the global North and South.

4. Evaluation and Context of E-Learning Platforms

Evaluation of all systems, not least an information system, is central to its sustainability. Evaluation is basically linked to information retrieval metrics such as precision, recall and fallout, among others. No doubt, it would be difficult for users to decide beforehand exactly what features they do and do not want, and to some extent the development of electronic devices is governed by an attempt on the part of the designers to “try it and see if there’s a use for it.” I have selected a digital service, namely electronic print in Library and Information Science (E-LIS), one of the services from bulletin board for libraries (BUBL) Link information gateway, as the object to evaluate. My motivation for choosing E-LIS is that I use this service regularly. I will consider the possibilities of evaluating E-LIS from a user’s perspective. The evaluation of E-LIS will be based on goals, user groups (types of user communities), use of evaluation results and the instruments (methods) that would best suit the different goals and evaluation tasks.
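Since precision, recall and fallout anchor much of what follows, a short illustration may help. The Python snippet below computes the three metrics from their standard set-based definitions over made-up document identifiers; it illustrates the general metrics, not any E-LIS-specific measurement.

```python
def retrieval_metrics(retrieved, relevant, collection_size):
    """Standard set-based information retrieval metrics."""
    hits = len(retrieved & relevant)               # relevant documents retrieved
    precision = hits / len(retrieved)              # share of results that are relevant
    recall = hits / len(relevant)                  # share of relevant documents found
    nonrelevant = collection_size - len(relevant)
    fallout = len(retrieved - relevant) / nonrelevant  # non-relevant documents retrieved
    return precision, recall, fallout

# Made-up document IDs: 4 retrieved, 3 relevant in a collection of 100.
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d1", "d3", "d7"}
print(retrieval_metrics(retrieved, relevant, collection_size=100))
# -> (0.5, 0.6666..., 0.0206...)
```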
Evaluation of e-learning platform services may take different forms or approaches [19,20,21,22,23], which suggests that evaluating e-learning platforms is itself a complex and difficult undertaking. According to Saracevic [22], however, the systems approach is the predominant and most widely used mechanism for evaluating all information systems, including e-learning platforms. A systems approach implies that e-learning platform services may be regarded as a system of interacting parts that function in concert to achieve specific targets. An evaluation can therefore be carried out on parts of the system, or on the entire system, to determine its effectiveness, its efficiency, or both [22]. Evaluating the whole system can pose significant challenges to the evaluator compared with an evaluation that focuses on one or two parts of it. Central to critical evaluation is the expertise needed to assess the system: the evaluator should have indices or values, objective or subjective, against which the system is assessed.
Effectiveness and efficiency relate to performance; evaluation is therefore primarily concerned with the performance of systems. Bollen and Luce [19] note that “the evaluation of services provided by e-learning platforms and collections is a multi-faceted problem that cuts across a wide range of systems, interfaces, and user communities as well as a multitude of issues in Human–Computer Interaction”. They further contend that “any evaluation of Digital Library collections and services must inevitably take into account the characteristics of the Digital Library’s user community”. Marchionini, Plaisant and Komlodi [24] share this contention. In this connection, Saracevic [22] identifies a number of evaluation types. One type deals with user studies involving different user communities (students, teachers, researchers); the center of attention is different design features, in terms of usability and functionality, with a view to improving design for the different user communities. User logs have also been used to study user interactions through the interface, again with a view to developing better usability and functionality. Another type of evaluation uses interviews to understand different users, their socio-cultural settings, relative interests, capacities to act (agency), opportunities, constraints and goals; in effect, this kind of evaluation explores how individuals use tools and technologies within specific socio-contextual settings. Yet another type of evaluation explores how different users identify, retrieve, read and use material in articles of interest.
From the above, it appears that each approach to evaluation serves a different purpose. Each approach has merits and demerits, so no single approach is superior to the others; it is the goal of the evaluation that determines whether an approach is appropriate. Saracevic [22] further identifies a number of approaches to evaluation. These include the ethnographic approach, which is well suited to attaining a general understanding of the function and outcome of a practice or construct within a wider collective or group framework. The sociological approach is appropriate for shedding light on social forces and effects. The economic approach is apt when it comes to accounting for economic factors such as investment cost, return on investment and payback time; it may be considered a kind of cost–benefit analysis to determine whether continued financial resource allocation to an e-learning platform project is justified. A situation in which economic analysis was instrumental in financial decision-making is described in Choudhury, Hobbs and Lorie [25]. A political approach concentrates on policy and political factors; for instance, what kind of institutional structures, in terms of policy and the legislative framework, must be put in place to facilitate the efficient running of an e-learning platform service? These issues, and many more, shape the evaluation of e-learning platforms.

5. Description of the E-LIS

E-LIS is an archive for materials in library and information science, formed in 2003. It is the first international e-print server in this subject area. E-LIS relies on the voluntary work of individuals from diverse backgrounds and is non-commercial. The purpose of the E-LIS archive is to make full-text LIS documents visible, accessible, harvestable, searchable and usable by any potential user with access to the internet. The materials available in the archive include books, book chapters, journal articles, conference proceedings, conference posters, conference papers, theses, working papers, newspapers, magazines, bibliographies, manuals, tutorials and instructional materials, among others. The wide array of materials available reflects the scope of the archive in meeting the needs of various users. Metadata are available for most of the materials, including abstract, additional information, alternative locations, author names, conference dates and locations, country, department and editors. The contents of the archive are accessible by search (quick, simple, advanced) and by browsing (by year, subject, authors/editors, books/journals, country). For browsing by subject, E-LIS uses the JITA classification system. JITA is an acronym of the first names of Jose Manuel Barrueco Cruz, Imma Subirats Coll, Thomas Krichel and Antonella De Robbio. JITA was developed from a merger of the NewsAgent Topic Classification Scheme (maintained by Mike Keen at Aberystwyth, UK, until 31 March 1998) and the RIS classification scheme of the (now defunct) Review of Information Science, originally conceived by Donald Soergel (University of Maryland). Searching and archiving in E-LIS are free for any user.
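As a rough illustration of how such a record-plus-classification design works, the sketch below models an E-LIS-style record as a plain Python dictionary built from the metadata fields named above, and shows how browsing by subject reduces to filtering on the classification field. The field names and the JITA label are assumptions made for illustration, not copies of the real E-LIS schema.

```python
# An illustrative record using the metadata fields named above; the
# field names and JITA label are assumed, not the actual E-LIS schema.
record = {
    "title": "Evaluating digital library services",
    "authors": ["Doe, J."],
    "abstract": "A short abstract...",
    "country": "Ghana",
    "year": 2014,
    "document_type": "Journal article",
    "jita_subject": "H. Information sources, supports, channels",  # assumed label
}

# Browsing by subject then amounts to filtering the archive on that field.
archive = [record]
matches = [r for r in archive if r["jita_subject"].startswith("H.")]
print(len(matches))  # -> 1
```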

5.1. Evaluation of the E-LIS

5.1.1. Evaluation Goals of the E-LIS

Basically, evaluation goals for E-LIS can focus on either the system or the user. In considering the goals of E-LIS as a system, the processing, engineering and content of E-LIS are the focal points. Here, by examining feedback from users of E-LIS, for example through log analysis, the system designers can develop new technologies that support a range of search strategies, from hierarchical selection to formal and comprehensive queries, so that the needs of both beginners and experts are met. This is geared towards improved performance of the system. Evaluation goals for E-LIS may include better learning for all user groups, improved research for specific user groups, and improved dissemination and communication between user and system; this last goal suggests collaboration. E-LIS can also be evaluated on the basis of usage patterns: time of visit or use of the service, preferences of users, duration of search, and the response time for feedback. Bollen and Luce [19] outline a methodology for generating such usage patterns from electronic learning server logs. This approach is useful for improving the organization of e-learning platform collections.
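As a toy illustration of this kind of log analysis, in the spirit of, but far simpler than, the Bollen and Luce [19] methodology, the sketch below derives two usage patterns (time of visit and pages requested per user) from made-up server log lines.

```python
from collections import Counter
from datetime import datetime

# Made-up log lines: timestamp, user identifier, requested path.
log_lines = [
    "2014-04-15T09:12:03 user42 /search?q=metadata",
    "2014-04-15T09:14:41 user42 /record/118",
    "2014-04-15T21:02:10 user7 /browse/subject",
]

visits_by_hour = Counter()   # time-of-visit pattern
pages_by_user = Counter()    # crude per-user activity pattern
for line in log_lines:
    timestamp, user, path = line.split(maxsplit=2)
    visits_by_hour[datetime.fromisoformat(timestamp).hour] += 1
    pages_by_user[user] += 1

print(visits_by_hour)  # Counter({9: 2, 21: 1})
print(pages_by_user)   # Counter({'user42': 2, 'user7': 1})
```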
E-LIS can also be evaluated against economic goals such as cost–benefit analysis. Cost–benefit analysis takes into account the qualitative value of e-learning platform collections and services to users. Even if digital libraries had a clear definition of what it means to be cost-effective, or a benchmark against which to measure their cost-effectiveness, additional work would be required to determine whether the benefits of an activity warrant the costs. If the cost of an activity is high and the payback low, the activity may be revised or abandoned. In the same vein, cost–benefit analysis can be carried out on E-LIS to justify its continued existence. In considering the goals of E-LIS from the perspective of the user, it is appropriate to capture the user at different social levels: users as individuals, as institutions, and as societies or communities. This introduces an element of complexity or diversity into the evaluation. Users of E-LIS include content providers, students, teachers and researchers. Within each category of user group, and across the groups, there are variations in specific needs and in the technological settings within which users work. For instance, Marchionini, Plaisant and Komlodi [24] argue that users exhibit a wide array of individual characteristics, preferences and experiences. An undergraduate student may not have the same preferences or experiences as a postgraduate student, even though both are students; researchers do not necessarily have identical backgrounds (historians, chemists, sociologists, anthropologists, authors, etc.). E-LIS users may show this kind of diversity too; capturing such user taxonomies in the evaluation of E-LIS is therefore crucial to guiding E-LIS interface design.
It is possible to construct user taxonomies based on motivation, domain knowledge, E-LIS system knowledge, focus, and time allocations, as was done by Marchionini, Plaisant and Komlodi [24]. The usefulness of such taxonomies in evaluation resides in the continued development of e-learning platform interfaces. They may also be useful for meeting specific needs (queries) and providing contextual information (scope of need). The methods used to construct the taxonomies may include distributing questionnaires to E-LIS user communities and examining the documents in E-LIS with a view to unearthing interface challenges in terms of content, users and strategies. Eventually, the results, regardless of the instrument used, can be employed to improve user satisfaction with the existing E-LIS service.

5.1.2. The Use of Results and Methods of Evaluation That Can Be Used for Specific Goals

A number of methods or instruments for evaluating library services have been identified in the literature. Generally, these instruments are either qualitative or quantitative; they include surveys (questionnaires), focus groups, user protocols and transaction log analysis, among others. Surveys are usually quantitative but can sometimes be qualitative. They are very useful for gathering information about E-LIS users’ previous or current behaviors, attitudes, beliefs and feelings. The results of a survey may be used by the E-LIS editorial board to drive different directions of strategic planning. For E-LIS system designers, survey results may assist in improving service quality, for instance by reducing response time; they may also help in setting priorities, informing customization, and revising the E-LIS website vocabulary. Focus group discussions are exploratory, guided interactions among seven to ten participants with common interests, the common interest in this case being the use of the E-LIS service. Insights from focus group discussions can inform resource allocation and strategic planning; for instance, the administrator of E-LIS can use the results to identify user problems and preferences related to the format of E-LIS collections and thereby increase user satisfaction. User protocols may be employed to gather in-depth insight into the behavior and experience of a person using the E-LIS tools. This is instrumental in identifying problems in the design, functionality, navigation and vocabulary of the E-LIS website. The results can assist E-LIS system designers in rearranging the E-LIS hierarchy, changing the order and presentation of search results, and revising the metadata classification scheme for text collections in E-LIS.
Transaction log analysis can basically be used to evaluate E-LIS system performance. It may be employed to study, unobtrusively, the interactions between users and the E-LIS website. It may also be employed to track patterns of use by different user communities and the distribution of use across communities. The results can be used to construct usage patterns over time, understand user needs and inform interface redesign.
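A minimal sketch of one such computation, the distribution of use across user communities, is given below; the community labels and the transaction structure are assumed purely for illustration.

```python
from collections import Counter

# Made-up transactions tagged with an assumed community label.
transactions = [
    {"user": "u1", "community": "students", "action": "download"},
    {"user": "u2", "community": "researchers", "action": "search"},
    {"user": "u3", "community": "students", "action": "search"},
]

counts = Counter(t["community"] for t in transactions)
total = sum(counts.values())
for community, n in counts.most_common():
    print(f"{community}: {n}/{total} transactions ({n / total:.0%})")
# students: 2/3 transactions (67%)
# researchers: 1/3 transactions (33%)
```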

6. Conclusions

The design, use and evaluation of e-learning platforms reveal complexities and challenges. The interaction of users with e-learning platforms shows that the process is neither smooth nor unproblematic. This is particularly so for individuals whose backgrounds (both professional and social) reflect limited previous use and understanding of such services. In fact, the difficulties manifest at different levels of the design, use and evaluation process, and they are unlikely to disappear soon. Since the design and use of e-learning platforms is an iterative process, the process keeps evolving, and every new day brings its own difficulties that need to be overcome. The difficulties may arise from different aspects of the design (interface, architecture, navigation, etc.). This calls for collaboration and the inclusion of diverse stakeholders in a bid to find lasting solutions to the complexities and difficulties associated with the design, use and evaluation of e-learning platforms. The design, use and evaluation of e-learning platforms is a vast field; each of the three is an extensive field in its own right. Therefore, this paper cannot claim to have dealt comprehensively with all the questions surrounding the three fields in a satisfactory manner. Not only do I lack the skills, experience and expertise to undertake such a huge and almost impossible task, but space constraints also would not permit it. The aim has been to give a brief and broad overview of the main issues; in terms of the scope of the topic, the commentary in this paper is just the tip of the iceberg.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Dix, A. Human-Computer Interaction; Springer: New York, NY, USA, 2009; pp. 1327–1331. [Google Scholar]
  2. Sharp, H.; Rogers, Y.; Preece, J. Interaction Design: Beyond Human-Computer Interaction, 2nd ed.; John Wiley and Sons: New York, NY, USA, 2011; Chapters 1 and 4. [Google Scholar]
  3. Hauck, R.V.; Weisband, S. When a better interface and easy navigation aren’t enough: Examining the information architecture in a law enforcement agency. J. Am. Soc. Inf. Sci. Technol. 2002, 53, 846–854. [Google Scholar]
  4. Toms, E. Information interaction: Providing a framework for information architecture. J. Am. Soc. Inf. Sci. Technol. 2002, 53, 855–862. [Google Scholar]
  5. Nielsen, J. Usability Engineering; Morgan Kaufmann: San Francisco, CA, USA, 1993. [Google Scholar]
  6. Lukasiewicz, A. Exploring the roles of academic libraries. Libr. Rev. 2007, 56, 821–827. [Google Scholar] [CrossRef]
  7. Tammaro, A.M. User perceptions of digital libraries. Perform. Meas. Metr. 2008, 9, 130–137. [Google Scholar] [CrossRef]
  8. Monopoli, M.; Nicholas, D.; Panagiotis, G.; Korfiati, M. A user-oriented evaluation of digital libraries. Aslib Proc. 2002, 54, 103–117. [Google Scholar] [CrossRef]
  9. Monopoli, M.; Nicholas, D. A user-centered approach to the evaluation of Subject Based Information Gateways: Case study ADAM. Aslib Proc. 2001, 53, 39–52. [Google Scholar] [CrossRef]
  10. Erdelez, S. Information encountering: It’s more than just bumping into information. Bull. Am. Soc. Inf. Sci. Technol. 1999, 25, 26–29. [Google Scholar] [CrossRef]
  11. Pharo, N. A New Model of Information Behaviour based on Search Situation Transition Schema. Inf. Res. 2004, 10. Available online: http://InformationR.net/ir/10-1/paper203.html (accessed on 28 April 2014).
  12. Smith, A.G. Search features of digital libraries. Inf. Res. 2000, 5. Available online: http://informationr.net/ir/5-3/paper73.html (accessed on 25 March 2014).
  13. Bates, M. What is browsing—really? A model drawing from behavioural science research. Inf. Res. 2007, 12. Available online: http://informationr.net/ir/12-4/paper330.html (accessed on 10 January 2014).
  14. Godbold, N. Beyond information seeking: Towards a general model of information behaviour. Inf. Res. 2006, 11. Available online: http://InformationR.net/ir/11-4/paper269.html (accessed on 10 January 2014).
  15. Wilson, T.D. Models in information behaviour research. J. Doc. 1999, 55, 249–270. Available online: http://InformationR.net/ir/9-1/paper164.html (accessed on 25 January 2014).
  16. Bates, M.J. The design of browsing and berry picking techniques for online search interface. Online Inf. Rev. 1989, 13, 407–424. [Google Scholar] [CrossRef]
  17. Rice, R.E.; McCreadie, M.; Chang, S.-J.L. Accessing and Browsing Information and Communication; MIT Press: Cambridge, MA, USA, 2001; Chapters 3 and 9. [Google Scholar]
  18. Cothey, V. A longitudinal Study of World Wide Web Users’ Information Searching Behaviour. J. Am. Soc. Inf. Sci. Technol. 2002, 53, 67–78. [Google Scholar] [CrossRef]
  19. Bollen, J.; Luce, R. Evaluation of digital library impact and user communities by analysis of usage patterns. D-Lib Mag. 2002, 8, 1–13. [Google Scholar]
  20. Covey, D.T. Usage and Usability Assessment: Practices and Concerns; Digital Library Federation Council on Library and Information Resources: Washington, DC, USA, 2002. [Google Scholar]
  21. Marchionini, G. Evaluating Digital Libraries: A longitudinal and Multifaceted View, Draft Version ed; Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign: Urbana/Champaign, IL, USA, 2000. [Google Scholar]
  22. Saracevic, T. Digital library evaluation: Toward an evolution of concepts. Libr. Trends 2000, 49, 350–369. [Google Scholar]
  23. Salampasis, M.; Diamantaras, M. Experimental User-Centered Evaluation of an Open Hypermedia System and Web Information Seeking Environments. J. Digit. Inf. 2002, 2. Available online: http://www.webcitation.org/5bn17aN1B.
  24. Marchionini, G.; Plaisant, C.; Komlodi, A. The people in digital libraries: Multifaceted approaches to assessing needs and impact. In Digital Library Use: Social Practice in Design and Evaluation; Bishop, A.P., van House, N.A., Buttenfield, B.P., Eds.; MIT Press: Cambridge, MA, USA, 2003; Chapter 6. [Google Scholar]
  25. Choudhury, S.; Hobbs, B.; Lorie, M.; Flores, N. A framework for evaluating digital library services. D-Lib Mag. 2002, 8, 1082–9873. [Google Scholar]
