Article

A Guide for Game-Design-Based Gamification

Cátedra Santander-UA de Transformación Digital, Universidad de Alicante, 03690 San Vicente del Raspeig, Spain
*
Author to whom correspondence should be addressed.
Submission received: 8 July 2019 / Revised: 21 October 2019 / Accepted: 24 October 2019 / Published: 5 November 2019

Abstract

Many researchers consider Gamification a powerful way to improve education, and many studies show improvements with respect to traditional methodologies. Several educational strategies have also been combined with Gamification, with interesting results. Interest is growing and the evidence suggests that Gamification has a promising future. However, a barrier prevents many researchers from properly understanding Gamification principles. Gamification aims to engage trainees in learning with the same intensity with which games engage players. But only some very well designed games achieve this level of engagement. Designing truly entertaining games is a difficult task with a great artistic component. Although some studies have tried to clarify how Game Design produces fun, there is no scientific consensus. Well-established knowledge on Game Design resides in sets of rules of thumb and good practices, based on empirical experience. Game industry professionals acquire this experience through practice. Most educators and researchers overlook the need for such experience to successfully design Gamification. Consequently, many research papers focus on single game elements like points, present non-gaming activities like questionnaires, design non-engaging activities or fail to understand why their designs do not yield the expected results. This work presents a rubric for educators and researchers to start working in Gamification without previous experience in Game Design. The rubric decomposes the continuous space of Game Design into a set of ten discrete characteristics. It aims to lower the entry barrier and help practitioners acquire initial experience with Game Design fundamentals. The proposed uses are twofold: to analyse existing games or gamified activities, gaining a better understanding of their strengths and weaknesses, and to help in the design or improvement of activities. The focus is on Game Design characteristics rather than game elements, as with professional game designers. The goal is to help practitioners gain experience towards designing successful Gamification environments. The presented rubric is based on our previous design experience, compared and contrasted with the literature, and empirically tested with some example games and gamified activities.

1. Introduction

In recent years, Gamification [1,2] has come to be considered a magic solution for most educational problems. Many researchers and practitioners chase it, and many studies try to unveil its secrets and details. In one form or another, the term and the field acknowledge the power of games to engage players and induce states of flow. Gamification chases this power to apply it to environments that are not originally ludic. The aim is to get people engaged in serious or important work with the same intrinsic motivation as in games.
This enterprise is noble but extremely complicated. As more and more research is carried out, results remain unclear [3,4,5,6,7,8]. Hundreds of research experiences have been undertaken, with mixed results. Many studies find benefits when applying Gamification, but many others do not, and some even report harm. The overall tendency seems to be some small but measurable benefits. These results are quite unexpected compared to the exponential rise in game sales and gaming culture in general.
The problem with most Gamification research seems to be its different focus from actual Game Design. Many studies pursue scientific isolation of statistical variables. This leads them to consider the isolated influence of individual game elements, like points, badges and leaderboards, on motivation and behaviour change. The problem with this approach is that a game is not an unrelated set of game elements. Metaphorically, a game is similar to a grand-cuisine dish: testing its isolated ingredients in other contexts does not convey useful information for learning to cook the dish.
This view is supported by relevant Gamification practitioners like Kevin Werbach, Yu-kai Chou or Sebastian Deterding [9,10,11] and also by Game Design experts like Raph Koster or Jesse Schell [12,13]. In Werbach’s words [9]: “Clearly not everything that includes a game element constitutes gamification. Examinations in schools, for example, give out points and are non-game contexts.” Deterding goes further in Reference [11]: “The main task of rethinking Gamification is to rescue it from the gamifiers.” For Deterding, the majority of gamifiers are confused, as they simply try to add points, badges and leaderboards to everything, with great disregard for the complexities of Game Design.
Games are complex environments that deliver experiences to players [13]. They are made of game elements, just as a dish is made of ingredients, but the process, interactions, uses and objectives are key to the final result:
Gamification should be understood as a process. Specifically, it is the process of making activities more game-like. Conceiving of Gamification as a process creates a better fit between academic and practitioner perspectives. Even more important, it focuses attention on the creation of game-like experiences, pushing against shallow approaches that can easily become manipulative. A final benefit of this approach is that it connects Gamification to persuasive design.
Kevin Werbach [9]
These reasons could explain why there is no scientific consensus on a formal approach to Gamification. There are analyses of the characteristics of good games [14,15] that Gamification pursues. There are also methodological approaches, design frameworks and even descriptions of design patterns based on Game Design principles, good practices and experience [3,16,17,18,19,20,21]. However, all approaches rely on subjective interpretation and creative design. In fact, many professional Game Designers and researchers express their view that games cannot be formally specified at all [22,23].
Even if games cannot be formally specified and individual game-element research does not yield complete information, there are useful approaches [16,24]. Assuming that Game and Gamification Design are artistic in essence, these approaches focus on acquiring design experience. There is no need to solve the “what-is-a-game” philosophical debate. Game-like designs that become engaging voluntary experiences for players can be successful. Willingness can make experiences fall on the persuasive or seductive sides of Tromp, Hekkert and Verbeek’s matrix of the ways in which design can influence behaviour [25]. Similar to References [9,11,12,13], this work focuses on this practical approach.
The main goal of this work is to help practitioners acquire Game Design experience for Gamification. Experienced practitioners may find methods, frameworks or models cited in the literature more suitable to their needs, particularly References [3,17,18,20,26]. These works have great value but require previous Game-Design-Based Gamification experience to be fully comprehended and put into practice. To build this required previous experience, a practical and simple approach is proposed: a measurement tool, a rubric, with a strong focus on Game Design aspects rather than on game elements.

Acquiring Design Experience

As previous design experience seems key [9,12,13], our proposal for new practitioners is to create and test their own designs. In our experience, iterating over one’s own designs leads to solid game-design-based Gamification design skills. However, analysing and improving designs is an almost impossible task for inexperienced designers. In the absence of personal experience to rely on, the only valid source is testing. Testing with trainees is essential, but doing so with no previous design guidance can result in an extremely slow and frustrating discovery process. This is an entry barrier that can produce two important problems: too many failures on initial attempts, and abandonment due to frustration. Moreover, when initial failures are not identified as a consequence of lack of experience, they can result in research papers blaming the field itself.
During fifteen years teaching Game Development and Gamification [27,28], we have perceived a great difficulty in passing on design experience to new practitioners. The problem, as discussed, seems to lie in the artistic nature of Game Design. Novice practitioners often underestimate the complexity of creating a design that can be put into practice, let alone a successful one. This is problematic, as their initial experiences will probably fail and be frustrating. There are design frameworks, methods and guidelines proposed for games and Gamification [3,16,18,19,20,21,29] that could help in creating better first designs. However, these proposals are either general or specifically for experts. They are not designed with novices in mind and can easily overwhelm them. For instance, Kreimeier’s patterns [16] condense many designers’ experiences. This is highly valuable but almost impossible to properly understand without previous experience of pitfalls and failures. Tondello et al. [20] explicitly state “Our set of heuristics is aimed at enabling experts to identify gaps in a gameful system’s design”, which clearly leaves novices out. Linehan et al. [3] propose to use Applied Behaviour Analysis from the field of psychology, with many interesting theoretical explanations. This is too much theoretical information for novices, who will probably require several testing iterations to relate it to actual practice. Similarly, Self-Determination Theory (SDT) [29] is the most widely cited theoretical framework. In essence, SDT is easy to understand but too generic. Novices need more specific and game-related descriptions, as SDT is purely psychological. Hunicke et al.’s proposal [18] splits Game Design into three blocks: Mechanics, Dynamics and Aesthetics (MDA). This simple classification helps organize designs, which is very useful for novices, but it does not help in measuring their value, comparing them with others or giving hints on how to improve them.
To help in this process, this work proposes a game-design-based rubric. The rubric focuses on measuring how well designed a game or activity is from a game-design-based Gamification perspective. The measure is formalized as a score from 0 to 20 points; the greater the score, the better. The rubric is based on a set of ten characteristics related to successful designs. These ten characteristics have been selected from our previous experience in Game Design and Gamification, partially in accordance with previously discussed works and with an aim to simplify analysis. The goal is to be useful enough to serve as an analysis and design tool while being simple enough to help novices.
It is important to remember that there is no known way to perform an objective assessment of a given design. The assessment obtained with the proposed rubric should be considered simplified initial guidance. This guidance is targeted at inexperienced designers, to help them overcome the entry barrier. In this sense, the rubric helps discretize designs and move them from the artistic to the analytic dimension. It also helps identify potential areas for improvement by pointing to the underperforming characteristics of a given design. These values are complementary to the previously analysed formal tools and frameworks, which makes it interesting to combine the rubric with them.
Section 2 describes the ten selected characteristics for the rubric in detail, explaining their design implications. Section 3 presents the rubric and explains its design constraints and criteria. Section 4 shows some initial evidence on the validity of the rubric by applying it to four activity samples: a commercial game, a gamified course from literature, a learning activity and a gamified version of the learning activity. Finally, Section 5 sums up conclusions and limitations of this work.

2. Ten Relevant Characteristics for Game-Design-Based Gamification

This section describes in detail the ten characteristics selected for the proposed rubric. The importance of each one is discussed by highlighting the relevant psychological arguments considered and by comparing classical educational environments with successful commercial games. Moreover, considerations from prestigious game designers, taken from their published works, are also cited and analysed.
This set of characteristics greatly overlaps those described in previous works [3,16,17,19,20,21], especially Gee’s learning principles that good games incorporate [14,15]. Most works are generalist and some are specifically focused on experts, which can be overwhelming for novices. Our proposed set aims to fill this gap. Its intended value comes from its use-case goal: simple and easy to understand for new practitioners.
We acknowledge that this set is subjective in nature, despite the arguments presented and discussed. We propose it based on previous Game Design and Gamification experience, and we expect future works to help refine it after gathering appropriate usage evidence. Section 4 gives an initial piece of evidence on the validity of this set as guidance and a basis from which to gather more evidence.
A description of the ten characteristics follows, in no particular order.

2.1. Open Decision Space

Autonomy is one of the central points of intrinsic motivation. For a trainee to be truly autonomous, there must be different possible decisions to take. In fact, the greater the space of possible decisions over time, the better. However, there are some common misconceptions whose analysis is relevant.
  • Correct decisions ill-form decision spaces.
    Many Gamification designs rest on questions or alternative paths where only one option is correct. This represents an ill-formed decision space because there is no true decision to take. Trainees are not being asked to decide and progress but are being tested instead. For an environment to foster autonomy and provide a truly open decision space, decisions should not be designed as correct/incorrect. Instead, decisions must produce consequences, and trainees should be free to play with situations, environments and consequences, experimenting and learning from the results.
  • Discrete custom-designed decision spaces challenge autonomy.
    It is common to manually design all possible decision choices. It seems natural to attempt to directly transmit knowledge to trainees. We teachers tend to transform our knowledge into possible situations, producing some form of decision tree. The decision space is thus reduced to pre-designed knowledge, which undermines autonomy. Trainees usually imagine decisions they would take but have to accommodate themselves to the designed choices. Creativity is prevented, curiosity diminishes and frustration rises. Open decision spaces that let trainees experiment with their ideas tend to be continuous, not pre-designed, more similar to simulations than to decision trees.
  • Failing to consider movements and interactions as decisions.
    The term ’decision’ is naturally related to high-level abstract thinking and neo-cortex processing. But any action a trainee performs at any instant is a decision. Many designs do not consider these as part of the decision space. This produces designs where either trainees cannot move or their movements are meaningless. As interacting with the world is one of our richest sources of information, failing to consider it greatly limits decision spaces.
These points are clearly addressed by great games. Decision spaces are usually continuous, as players can move freely over time and experience the consequences of their interaction decisions. Take, for instance, Super Mario Bros for the Nintendo Entertainment System (NES) (see Figure 1) [30]. When facing the first enemy, there are virtually infinite ways to do it. We often simplify it by thinking that you can jump on it, jump over it or collide with it and die. However, there are virtually infinite alternatives: jumping earlier or later, higher or lower, faster or slower and so forth. Players can even jump several times back and forth, advance and retreat, or do anything they can imagine based on the free will given by the rules of the world. In fact, Demaine et al. prove that Super Mario Bros is PSPACE-complete [30], the hardest class of problems that can be solved in polynomial space. Of course, this great complexity comes exactly from the openness of its decision space. Generally speaking, broader decision spaces that are not ill-formed produce more complex problems. And the greater the complexity, the more options for creative behaviour, which fosters player autonomy.

2.2. Challenge

Designing challenging activities is a key point in Gamification and a difficult task to accomplish. An activity is considered challenging when it tests the limits of our ability in subtle ways. Oversimplifying, the design space can be considered to have only two dimensions: the difficulty of the task and the ability of the trainee. When difficulty and ability match, trainees are faced with activities that they are able to solve [31]. However, when difficulty is much higher than ability, trainees usually get frustrated. On the contrary, if ability is much higher than difficulty, trainees probably become bored. There is a narrow space between both extremes where difficulty and ability are evenly matched. This simple analysis of the activity design space for challenge is the basis for the theory of the flow channel [32] (see Figure 2).
In essence, challenging trainees consists in assigning them interesting tasks that lie on the verge of their abilities. Although simple in concept, challenging trainees in an educational environment is difficult. A trainee can fail a challenging task several times while learning. Educational environments tend to punish failure by diminishing trainees’ marks. This works against using challenge as a driver for learning and motivation. In the presence of punishments, trainees avoid difficulty, even at the cost of boredom and diminished learning. Marks are the most important outcome pursued by trainees, and that must be taken into account.
The flow channel shown in Figure 2 (left) represents a dynamic space. Trainees’ abilities evolve over time. As abilities increase, previous challenges become boring and new ones are required. Tasks have to be designed with incremental difficulty in mind. Figure 2 (center) represents the general concept of incremental difficulty as a linear progression. However, this progression should not be considered ideal. As we humans are not machines, our brains usually dislike flat linear progressions. Hollywood movie makers usually design movies following a sinusoidal pattern of fast action events followed by relaxed moments. Figure 2 (right) shows this same concept applied to incremental difficulty. This approximation generates stress spikes followed by more relaxed moments. Stress spikes on the verge of the flow channel force trainees to push their limits, adapt and learn. Easier activities let trainees reinforce their sense of progress while they release previous stress and prepare for the next spike. Moreover, relaxed moments also represent a psychological reward, as trainees subconsciously acknowledge their new abilities. This pattern closely resembles the General Adaptation Syndrome described by Hans Selye [33], summarized in Figure 3.
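As a minimal sketch of this idea (ours, not taken from Reference [32]; the slope, amplitude and period are arbitrary assumptions), the sinusoidal progression of Figure 2 (right) can be generated by superimposing an oscillation on a linear ramp:

```python
import math

def target_difficulty(t, slope=1.0, amplitude=0.4, period=5.0):
    """Difficulty at time t: a linear ramp tracking the trainee's growing
    ability, plus a sinusoidal term producing stress spikes followed by
    relaxed moments (Figure 2, right)."""
    return slope * t + amplitude * math.sin(2 * math.pi * t / period)

# Spikes push trainees to the verge of the flow channel; the troughs let
# them consolidate new abilities before the next spike.
for t in range(11):
    print(f"t={t:2d}  difficulty={target_difficulty(t):5.2f}")
```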

2.3. Learning by Trial and Error

This is arguably one of the key differences between great computer games and traditional teaching environments. As discussed before, great computer games appropriately challenge players. Proper challenges put players on an edge where they can narrowly succeed or fail depending on their abilities [15,26]. This produces engagement with the most relevant thing a game conveys: learning. Players fail many times and try again, learning from their failures. They continue trying as long as they feel competent to learn, and eventually they succeed. This guiding force comes from the natural human desire to learn, and so it reinforces autonomy and will. This cycle happens so naturally because great computer games are safe environments for failure. Players do not want to fail, but they are not afraid to do so either. They assume failure is part of the learning process and then try different things to improve their abilities.
In contrast, traditional learning environments are designed to prevent, prosecute and punish failure. This situation often arises from assessing learning on the basis of task results. Two trainees solving a given task can obtain different marks depending on their failures during their respective solving processes. This drives trainees to focus on preventing failure at all costs to preserve their marks. All learning through challenge and experimentation is removed from the environment.
To mimic computer games and obtain the benefits of challenge and experimentation, trial and error must be considered a central way to learn. A proper Gamification design should focus trainees on goals and let them freely experiment and fail without punishment. Trainees need to know that failing is safe in order to be confident enough to experiment. In fact, it would be even better if the environment encouraged them to fail and analyse: learning from failure is extremely valuable and is often forgotten due to too much focus on task results.
Moreover, solving problems by trial and error, creating solutions, failing, redoing and refining produces “professional experience”. In fact, professionals usually say that the greatest expert is a person who has made all possible mistakes. A great Gamification design understands the importance of experience and designs situations for trainees to learn by trial and error.

2.4. Progress Assessment

Computer games generate virtual environments that evolve with player interactions over time. This evolution immediately informs players about their progress inside the game. Many computer games also include progress measures and feedback systems that constantly inform players about their statistics, achievements, awards, status and, in a general sense, their progress. Part of players’ engagement in games comes from their sense of progress [3]. Players build upon their own progress, as their achievements encourage them to pursue further steps.
Many learning environments feel very different. Trainees attend lessons and then have to practice or study contents on their own. There is little or no feedback on their progress. Occasionally, they can check whether they manage to solve some exercises. However, this is radically different from having constant feedback on progress and clear goals for the next levels. One key element in this feedback is the measurement of already-achieved success. Whenever players obtain an award or finish a level, they move on in their gaming adventure and their previous success is acknowledged. Their achievements are never removed, even if they fail afterwards [14]. Compare this with lessons in which a trainee can solve all proposed exercises but fail the final exam. There is no progress at all because there is no assessment and acknowledgement. All that matters is the result of the final exam. And that is the reason why many trainees do not care about solving exercises during lessons. They only need to prepare for the exam at the end and do well on it: progress does not matter at all.
In order to generate engagement and maintain interest, Gamification designs should include one or several forms of progress assessment. Moreover, designing for progress assessment also helps in designing incremental difficulty for challenge, as progress and difficulty are closely related [26]. Ideally, progress assessment should not be based on extrinsic rewards like points or badges. Intrinsic motivation requires trainees to be focused on learning goals per se. Too much emphasis on extrinsic rewards can change trainees’ focus, which would be detrimental to learning. For example, some trainees focus on passing subjects to get a degree without worrying about learning. Getting a degree is an extrinsic reward that eclipses their interest in learning and acquiring abilities. Consequently, extrinsic rewards should be used to assess progress with care, ensuring that the main focus is always placed on learning goals.
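A minimal sketch of the accumulate-only principle described above (our illustration; the class and its names are hypothetical) is a progress record to which achievements can be added but never removed, so earlier successes stay acknowledged even after later failures:

```python
class ProgressRecord:
    """Accumulating progress: achievements are acknowledged once and never
    removed, mirroring how games preserve finished levels and awards."""

    def __init__(self):
        self.achievements = set()

    def acknowledge(self, achievement):
        self.achievements.add(achievement)

    def report(self):
        return sorted(self.achievements)

record = ProgressRecord()
record.acknowledge("level 1 finished")
record.acknowledge("first equation solved")
# A later failure removes nothing: the record only ever grows.
print(record.report())  # ['first equation solved', 'level 1 finished']
```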

2.5. Feedback

The most relevant difference between computer games and traditional learning environments lies in the quantity, quality and rapidity of feedback. Many good computer games act as simulations, which confers on them properties similar to reality: players get immediate feedback in response to any of their actions. This is a key point both for learning and for engagement: immediate feedback. It is better understood with an example: imagine a child learning to play soccer. Every time the child kicks the ball, it reacts and moves depending on the kick. This feedback lets the child learn applied physics: the child learns to control the movement, spin, momentum and force transmitted to the ball through kicking. Now imagine a delayed ball that reacts 24 h after being kicked. Learning how to kick the ball and get a desired reaction would require great patience and effort. Many trainees would rapidly desist, demotivated by such slowness and unable to learn effectively. Appropriate, on-time feedback is crucial for both learning and engagement [3,12,13,26].
This is an important problem in many traditional learning environments. Many learning activities have to be assessed by a teacher. For instance, trainees solving math exercises wait until they receive the teacher’s corrections. This is similar to the 24-h delayed ball of our previous example. A computer game designed this way would probably be played by no one. A designer would probably envision something more dynamic, like the “Sum Totaled” mini-game inside “Brain Age Express: Math” for the Nintendo DSi [34] (see Figure 4). In this mini-game, monsters attack the player, who can destroy them by adding the numbers on their bodies. The player has 3 lives, one of which is lost every time a monster hits the player’s avatar. The activity is based on adding numbers, but its rules make it dynamic and the player gets constant, immediate feedback: enemies explode when the player writes down a correct answer, and the action continues uninterrupted otherwise.
The comparison between the Sum Totaled game and the traditional addition practice in Figure 4 shows the importance of appropriate feedback. Both activities are mathematically the same (apart from their difference in difficulty), but trainees performing the traditional one will have no feedback stimuli to learn from. They will need to wait for the teacher’s corrections. Moreover, the game dynamics encourage trainees to stop fearing failure and produce more answers, because being quick is crucial. This promotes learning from failure and has the potential to make learning more time/cost efficient.
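To make the contrast concrete, even a few lines of code can turn the traditional worksheet into an immediate-feedback drill in the spirit of Sum Totaled. This Python sketch is a hypothetical minimal illustration (no lives, timer or graphics), not the actual Brain Age design:

```python
import random

def addition_drill(rounds=5):
    """Minimal immediate-feedback loop: every answer is checked the moment
    it is entered, so trainees learn from each failure and can retry."""
    for _ in range(rounds):
        a, b = random.randint(1, 9), random.randint(1, 9)
        while True:
            answer = input(f"{a} + {b} = ")
            if answer.strip() == str(a + b):
                print("Correct!")   # immediate positive feedback
                break
            print("Try again.")     # failure is safe: just retry

if __name__ == "__main__":
    addition_drill()
```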

2.6. Randomness

Randomness is a relevant factor for learning and engagement and links both of them together. In its most fundamental definition, learning is about discovering and modelling patterns and testing the constructed models against reality through experimentation. This describes an iterative process for learning, in which engagement arises naturally when trainees constantly find new ways to refine and validate their mental models. An appropriate degree of randomness can keep trainees iterating longer, as their minds will continuously try to refine their models based on unexpected observations. Human minds are not well suited for dealing with probabilities, as they tend to model in terms of strong cause-effect relationships. This generally explains phenomena like the gambler’s fallacy or the hot-hand fallacy [35]. These fallacies show us that randomness itself can be used to produce engagement. Therefore, it is quite relevant for games and Gamification designs.
Well-designed randomness can provide a useful consequence: surprise. Surprise is one of the most desirable feelings both in learning and in playing. Schell describes it as “so basic that we can easily forget about it” in Reference [13]. Schell also describes fun as “pleasure with surprises” and reminds us that “Surprise is a crucial part of all entertainment - it is the root of humour, strategy and problem solving. Our brains are hard-wired to enjoy surprises”. In fact, surprise happens when observations are radically different from our mental models. The more unexpected the event, the more information it carries: this is a natural consequence of Shannon’s definition of entropy in information theory [36]. This means that surprises are great sources of new information, which can push trainees to revise their mental models, that is, to learn.
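In Shannon’s formulation, the self-information of an event $x$ with probability $p(x)$ is

$$I(x) = -\log_2 p(x) \text{ bits},$$

so an event expected once in 32 occurrences carries $-\log_2(1/32) = 5$ bits, while a near-certain event with $p = 0.99$ carries only about $0.014$ bits. A well-placed surprise is, quite literally, a burst of information.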
Consequently, it is key to consider appropriate uses of randomness in our Gamification designs to foster engagement and learning through surprise.

2.7. Discovery

As Koster states in Reference [12], a good game is one that “keeps the player learning”. One of the most important ways to keep players learning is to present them with new content at an adequate rate. This renews interest in the game and keeps players eager to continue discovering more. It can also trigger surprise, depending on the nature of the new content and the way it is presented. However, discovery is as difficult to design as challenge. New content has to build upon previous content to balance novelty and familiarity. Similar to the flow channel (Figure 2), if content is radically new it can easily be difficult to understand or accept. New information that cannot link to pre-existing mental models becomes similar to noise: no pattern can be found in it, and so it cannot be modelled and learnt. Some degree of familiarity is needed to help players understand, accept and enjoy new content, but too much would eclipse novelty, making new content not feel new at all.
Games present basically two ways of delivering new content: discovery and unlocking. Unlocking works by asking players to perform some achievements to unlock new content. Typically, this means finishing some levels before being able to play new ones. The other way is placing new content in such a way that players will discover it while playing. Discovery can be equivalent to unlocking by delivering the same content at the same rate. However, well-designed discovery can produce better feelings in players, like surprise, reaffirmation and a raise in self-esteem. Moreover, discovery can also be designed non-linearly. Games can have secret content, not required to succeed, present only for players that go beyond normal play. This is also an indirect way to reward players for their attention to detail, research or clever play. It is also an interesting way to convey rewards, as discovery is not perceived as a reward but as a personal achievement. This has a higher probability of fostering intrinsic motivation.
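As a minimal sketch (ours; the finish-all-previous-levels rule is an arbitrary assumption), unlocking reduces to a predicate over recorded achievements:

```python
def is_unlocked(level, finished_levels):
    """A level unlocks once all previous levels are finished."""
    return all(previous in finished_levels for previous in range(1, level))

finished = {1, 2}
print(is_unlocked(3, finished))  # True: levels 1 and 2 are done
print(is_unlocked(5, finished))  # False: levels 3 and 4 remain
```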
Discovery is not commonly used in Gamification. This is probably due to the difficulty of content and activity design. Educational contexts tend to be linear, and content is usually known beforehand. Trainees expect contents to be introduced first, then explained linearly. This relates to what we stated in Section 2.1: activities and contents are usually designed with a single correct path, expecting a concrete answer. To add discovery, designs require open spaces in which trainees’ decisions are relevant. Otherwise, discovery has no meaning at all. And this is the root of the difficulty of including discovery: it is difficult to design activities or content with proper open decision spaces. So, when willing to introduce discovery, it is advisable to first think about activities with open decision spaces.

2.8. Emotional Entailment

Emotions have generally been ignored in education. Educational contexts usually focus on factual content, learning and methodologies for better learning. Everything tends to be concentrated on the effectiveness of factual learning. Emotions are seldom considered. However, it is quite common for trainees to define their teachers in terms of their feelings. Usual comments include “I like lessons from this teacher because they are X”, X being a qualifier like “fun”, “entertaining”, “approachable” or “kind”. Emotions are a key factor in all human relations, and they also play a key role in learning and engagement. It is known that high-intensity emotions produce long-lasting memories. In fact, even before the term Gamification was coined, many studies targeted fun as a catalyst for learning [37,38,39].
Similar to movies, games cannot be successful without paying close attention to emotions. At the very least, a game is always expected to be fun. But fun, like any other emotion, is created inside the player’s mind. In Schell’s words, “When people play games, they have an experience. It is this experience that the designer cares about. Without the experience, the game is worthless” [13]. The game itself is not the experience; rather, it is the tool that enables it. That is what makes anything we create different for each person: the experience always happens in the mind of the player. And that makes it so important for games to be emotionally entailing, because they will attach to emotions in the mind of the player, producing a much better and more personal experience.
Gamification tends to use the same main tools games use to construct emotional entailment: characters, stories and aesthetics [13]. The problem usually lies in the complexity of these tools. All of them require great abilities and long periods of time when trying to mimic what commercial games do. This is too expensive and usually not cost-effective in educational environments. Simple approaches are preferred in this case: simple stories like “escape from the enchanted house” or “disarm the bomb” could be enough for an emotionally entailing environment. Trainees can be given freedom to create their own characters (as in role-playing games), and aesthetics can be imaginary. Moreover, direct interaction between trainees can also help create emotional entailment. Forming groups, sharing challenges and achieving common goals are preferred approaches in educational environments.

2.9. Playfulness Enabled

A playfulness-enabled game is one versatile enough to be used as a toy. Games have goals; toys do not. Some games can be played without focusing on goals, in a playful way. Examples include Minecraft, the Grand Theft Auto series and Goat Simulator (see Figure 5) [40]. These kinds of games are classified as sandbox or open-world games.
In recent years, game designers have been paying more attention to playfulness, as sandbox or open-world games are increasingly in demand. The reasons for this demand have already been pointed out: players have complete autonomy to develop their own creativity in vast open decision spaces; they can pursue their own goals, experiment by trial and error, create their own personal challenges and constantly discover what happens as a result of their actions. Clearly, these kinds of games properly address many of the items in this rubric, including the ability to support playfulness. Some of these games also have goals, but they let and even encourage players to do anything they like, pursue goals in any order or even forget about goals and just explore and experiment. This is how these games become toys: they can be played in almost any imaginable way, similar to children playing invented games with a ball. The ball is just a toy that can be used for any kind of play.
Playfulness is absent most of the time in educational environments. However, it is present in research and development environments. In fact, most discoveries in many branches of knowledge came out of experimental approaches. These approaches closely resemble the playful nature of toys. Research has no intrinsic goals in general: it emerges from raw questions. Researchers are presented with current evidence and ask questions about why or how things happen. That leads to experiments that seek answers, which in turn give new evidence. Evidence then gives developers ideas to create. And this whole cycle is driven by curiosity. Therefore, it can be considered a playful approach.
This characteristic is highly desirable in our educational environments. Curiosity is the most important driver for knowledge, and a playfulness-enabled environment fosters curiosity and experimentation. However, there is a great challenge involved. As teachers, we usually design from knowledge to activities. This implies that activities are designed to practice and acquire some concrete knowledge or abilities. So activities tend to be the opposite of sandboxes: they are usually focused on some finite set of goals and give trainees little or no space for experimentation or play. Great Gamification designs should change this focus and seek to produce sandbox-like activities.

2.10. Automation

One of the main differences in Figure 4 is due to automation. Characteristics like feedback, challenge and randomness are greatly affected by the level of automation. A game like Brain Age cannot exist without automation. Similar games can be made, even in manual contexts, but they will be different from Brain Age.
Interaction based on the immediate feedback of a computer game generates great amounts of information per second. Players’ brains subconsciously analyse cause/effect relations between this information and their input interactions. This fosters adaptations in players’ brains as they advance in practicing and mastering the game. This practical learning also happens in sports, which are real-life games without computer support.
When referring to a gamified activity, automation defines the level of human intervention required to produce responses to trainees’ inputs. It also refers to the need for human intervention to enforce the rules. Computer games automatically process all inputs from players, give immediate responses and enforce the rules without any human intervention. By contrast, a group of players of a board game have to do all this processing themselves: throwing dice, counting, interpreting rules, changing the status of the game and so forth. Exactly the same happens with tests, exams or manual classroom activities.
Therefore, there are two relevant differences between manual and automated activities with respect to learning: the stream of information generated and the immediacy of the responses to input interactions. Both have been discussed under previous characteristics like feedback, challenge and learning by trial and error, and both have great impact on learning outcomes. For Gamification, this means that automation should be sought whenever possible. However, not all contexts are easy or viable to automate, nor does every automation have to include computers. If we consider soccer, most of the game is automated. A referee is required to enforce the rules, but most interactions are performed by the field, the ball, the goals and the players. In fact, a casual soccer match can be played without a referee. The same happens with other games and sports. This shows that a great level of automation can be achieved with appropriate real-life designs.

3. The Rubric

Table 1 shows the game-design-based rubric with the ten selected characteristics, their assessment criteria and the assigned scores. The rubric has ten rows, one for each characteristic. Each row is divided into three columns that hold the criteria for assigning scores from zero to two. For each characteristic, the given score is the one at the top of the column containing the criteria that most accurately describe the design. For simplicity, only an integer score is assignable to each characteristic.
The rubric has been designed as an instrument, and so it meets the requirement of fitting on a single page. To accomplish this, the criteria have been written in a few simple words. This makes them simpler and more direct but less detailed and specific. It is advisable to understand the written criteria as general contextual descriptions. They are meant to be complemented with the more detailed descriptions from Section 2. Also, criteria are written as three or four sentences per cell. For an appropriate application, they should not be treated as a checklist: depending on the design being assessed, they may not even be applicable as written. These sentences are better understood as descriptions of general observable symptoms of designs that meet the criteria. This is a consequence of designs being artistic in nature: strictly objective descriptions would not be applicable most of the time.
Two main outcomes arise from the use of the rubric as proposed. First, the rubric is an easy-to-use instrument for assessing the strengths and weaknesses of designs with respect to their game-like characteristics. Second, knowledge of strengths and weaknesses helps in thinking of ways to improve designs, creating a feedback cycle of analysis and improvement. These benefits are limited by the subjective nature of the rubric and by the discretization it imposes on the analysis space: only ten characteristics and three scores. However, these limitations are acceptable and even desirable in the selected context of helping new practitioners overcome the lack-of-experience entry barrier.
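To make the aggregation concrete, a minimal sketch follows; the characteristic names come from Section 2, while the helper function itself is our illustration and not part of Table 1:

```python
CHARACTERISTICS = [
    "open decision space", "challenge", "learning by trial and error",
    "progress assessment", "feedback", "randomness", "discovery",
    "emotional entailment", "playfulness enabled", "automation",
]

def rubric_score(scores):
    """Sum the ten characteristic scores (each 0, 1 or 2) into the
    overall 0-20 rubric score."""
    total = 0
    for name in CHARACTERISTICS:
        value = scores[name]
        if value not in (0, 1, 2):
            raise ValueError(f"{name}: score must be 0, 1 or 2, got {value}")
        total += value
    return total

# Example: the Super Mario Bros assessment of Section 4.1 yields 17 points.
mario = dict(zip(CHARACTERISTICS, [2, 2, 2, 1, 2, 1, 2, 2, 1, 2]))
assert rubric_score(mario) == 17
```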

4. Rubric Application Samples

As an initial piece of evidence, we show four samples of application of the rubric to four different activities, in two blocks: Section 4.1 and Section 4.2 analyse the Super Mario Bros game and an unsuccessfully gamified 16-week course described in the literature. Section 4.3 and Section 4.4 analyse the activity of solving a single system of linear equations and a gamified activity designed around solving systems of linear equations. The first pair of examples shows how the rubric compares a successful game with an unsuccessful gamification, expecting a great difference in their scores. The second pair shows how the rubric measures the difference between a single classic learning activity and a Gamification design produced with the items of the rubric in mind. This gives an idea of how the rubric could be used to help practitioners create and improve their initial designs.
This small piece of evidence does not prove the general validity of the rubric but yields an initial hint. More supporting evidence is required in any case to validate or discard the rubric.

4.1. Super Mario Bros (NES)

Next, we will assess the game Super Mario Bros [30] (see Figure 1). The player controls Mario, a plumber in an imaginary world who has to save a princess from an enemy. To do so, Mario has to overcome many perils and enemies across a series of levels. Mario’s abilities include running and jumping, getting inside pipes, breaking blocks with a punch from below and firing. Super Mario Bros is classified as an action-platformer game: most of the time, Mario has to jump from platform to platform to overcome the perils.
Super Mario Bros is considered one of the most played games in videogame history. Millions of people have played either the original game or any of its successors. Let us apply the rubric and confirm whether this popularity correlates with its score:
  • [2] Open Decision Space. The game lets the player take movement decisions (actions) in a continuous world. Taking any two players who successfully finish a level, it is almost impossible for both of them to perform exactly the same actions. The player is in total control of the action, with potentially infinite options in a continuous space.
  • [2] Challenge. The game is composed of a series of well-designed levels that challenge players. Difficulty progression is sinusoidal, with some easier levels after more challenging ones. It is balanced and tested by designer intuition through iterations.
  • [2] Learning by trial and error. As in many games, learning by trial and error is at the very core of the game. Failure is permitted with a number of lives in one game, but there is no limit on games per player. The player can complete the game regardless of the number of games or lives lost while learning. Even the level design is thought to encourage players to learn by experimenting.
  • [1] Progress assessment. The game assesses the progress of the player through levels and player status. Whenever a level is finished, the player does not repeat it even if lives are lost. Inside a level, the player always knows how to continue towards the end, and feedback through movement, points, enemies and music reports the progress.
  • [2] Feedback. As in most action-platformer games, there are sixty frames per second of continuous cause/effect feedback that lets the player sense control and learn. Moreover, the game design informs of all events happening, such as lives lost, enemies beaten, objects obtained and so forth.
  • [1] Randomness. Although the game is predictable, with no actual random events happening, there are some enemies with elaborate movements that give the player some sense of unpredictability.
  • [2] Discovery. Players discover new levels and worlds as they finish previous ones, in an unlock-like fashion. There are secret places, items and bonuses at different locations that reward players for their attention to detail and exploration. Also, there are some special behaviours of game elements that can be discovered by experimentation.
  • [2] Emotional entailment. The complete game creates an emotional experience for the player with its aesthetics, characters, music and story. It is completely conceived as an adventure in an imaginary world where characters live and become “real” in some sense for the player.
  • [1] Playfulness enabled. Although the game has clear goals and rules, players have room to explore and be creative. In fact, communities of players have engaged in new challenges like the speed-run modalities, creating new rules on top of the game. The game was not designed to be played as a toy, but players can and do use it this way.
  • [2] Automation. As a console game, meant to be played at home, the game is fully automated. Feedback is immediate and all rules are enforced automatically.
According to the rubric, Super Mario Bros has a score of 2 + 2 + 2 + 1 + 2 + 1 + 2 + 2 + 1 + 2 = 17 points, which is quite reasonable for such a well-known and widely played game.

4.2. Unsuccessful 16-Week Gamified Semester

As a more elaborate example, we will apply the rubric to the Gamification study by Hanus et al. [41]. A class was divided into two groups. The control group received normal lessons, materials, assignments and exams. The experimental group was given the same content as the control group but in a standard gamified fashion, including badges, leaderboards and incentive systems. Badges were given as rewards for positive behaviours like interacting with class materials, studying in pairs in the library or handing in assignments early. There was also a badge for entering lessons dressed up like a videogame character. In addition to badges, students also earned coins for small contributions to class discussions or for sharing interesting information. Students could use coins to earn some class benefits, like an extension on a paper.
Students were required to obtain some mandatory badges, but coins were optional. The leaderboard was ordered by the number of badges obtained, with students using pseudonyms, and was updated weekly.
The description of the system is too broad, which produces a great level of noise. Therefore, the final score from the rubric should be taken with care. Adding a big error bar to the final result is advisable to compensate for the noise. A raw application of the rubric yields these scores:
  • [1] Open Decision Space. Although badges are reported as mandatory, some elements seem to be optional, like coins. Students can also decide how to use earned coins. However, this seems like a small set of options with not much strategy involved.
  • [0] Challenge. No description of badges seems to match levels of difficulty or required abilities. They are focused on behaviour. There seems to be no consideration of difficulty.
  • [0] Learning by trial and error. There is no consideration of opportunities, number of assignments or punishments.
  • [1] Progress assessment. The number of badges earned, coins and the leaderboard give some form of progress assessment.
  • [1] Feedback. Feedback seems to come mainly from the teacher, and the leaderboard is updated once a week. Therefore, feedback seems rather slow.
  • [0] Randomness. Everything described in the system seems concretely specified, with no room for surprises or random events.
  • [0] Discovery. Similarly, as everything seems predefined, there is nothing to unlock or discover.
  • [0] Emotional entailment. There are no characters, no story and no aesthetics. The only content that could be related to emotions is the badge for dressing up like a videogame character.
  • [0] Playfulness enabled. Similarly, the description of the system does not involve any ability to use activities as toys or even play with strategies. Some creativity could be exhibited with the videogame-character dressing badge or in the way coins are spent.
  • [0] Automation. Nothing is automated. Even badges have to be claimed by students by filling in forms. However, the weekly update of the leaderboard may be perceived by students as some small form of automation.
The rubric gives a final score of 3 points for this Gamification design. Even allowing for an error bar of up to 100%, the maximum value would be 6 points, really far from the 17 points obtained by Super Mario Bros. This clearly appears not to be enough to induce important motivational changes in students.
This analysis supports the results obtained by Hanus et al. [41], who concluded that the Gamification methods they used had no positive impact on learning and could even harm student motivation. As the 3 points obtained are far lower than the score of Super Mario Bros, a much lower motivational level could be expected. This result is consistent with what Hanus et al. found in their study.

4.3. Single Learning Activity: Solving a Linear-Equations System

To establish a comparison with a common learning activity, we now apply the rubric to a linear-equations system exercise. Let us consider a trainee solving the system on paper and handing it to the teacher. A week later, the trainee receives the assessed exercise. These would be the rubric scores:
  • [1] Open Decision Space. Some minimal decisions can be considered regarding the solution method and the order in which to perform the steps.
  • [0] Challenge. It is a single activity, so there is no way to match the activity with ability. Difficulty is fixed.
  • [0] Learning by trial and error. While trainees produce their solutions there is no feedback, no way to know whether decisions are good or bad. Therefore, there is no way to learn from cause and effect.
  • [0] Progress assessment. The only perceivable progress would be the steps taken towards the solution, but that is no form of progress assessment.
  • [0] Feedback. There is no feedback in response to actions, and teacher feedback takes one week. Cause/effect learning is almost impossible.
  • [0] Randomness. Everything is completely predictable and there are no surprises.
  • [0] Discovery. As all content is fixed, there is no unlocking or discovery at all.
  • [0] Emotional entailment. There are no characters, no story, no aesthetics, no content that could be related to emotions.
  • [1] Playfulness enabled. There is some room to experiment with procedures or methods, but it is very limited.
  • [0] Automation. Everything is performed manually, with no automation at all and a slow response time (one week).
This gives a final score of 2 points for the learning activity in isolation, in strong contrast with Super Mario Bros. A similarly strong difference is usually perceived in student motivation between these two activities. Both scores seem intuitively correlated with this general perception.

4.4. Gamified Version of the Linear-Equations Solving Activity

Now we consider an explicit Gamification design created for the activity of solving linear equations. This activity was presented by Llorens et al. [42]; here, we present a summary of the design along with its evaluation using the rubric. For complete details on the design, please refer to Reference [42]. Basically, Llorens et al. propose these changes to the activity:
  • Create an automatic generator of linear-equation systems to present students with hundreds of exercises instead of one (a minimal sketch of such a generator is shown after this list).
  • Classify generated systems into 6 levels of difficulty depending on their intrinsic characteristics, number of variables and numerical complexity.
  • Form teams during solving sessions and have rules to require teams and individuals to develop strategies to distribute tasks and face challenges.
  • Give points to valid solutions depending on the assigned difficulty of the system.
  • Make difficulty levels unlockable, with clear unlocking rules that force students to appropriately master levels before proceeding.
  • Have a student experience level (XP) that increases as students solve systems and successfully resolve proposed activities. Define experience levels and use them as a measure to form teams and unlock difficulty levels.
  • Spread the activity across many sessions and maintain points, experience and levels. Let students evolve over the course.
  • Produce random events that interrupt sessions and change rules surprisingly. Examples: A fleeting system that has to be solved fast, a red-code event in which students have to deactivate a bomb or a dizzy time during which solutions have to be given inverted.
  • Define a set of achievements to give to students, including some secret ones to reward their research or detailed abilities.
  • Automate the whole system with an application that lets students select systems, send solutions and receive instant status reports on their mobile phones.
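The first item, the exercise generator, is easy to picture in code. The following Python sketch is our own minimal illustration, not the actual application from Reference [42]; it builds a random system with integer coefficients and a known integer solution, so every generated exercise is solvable by construction:

```python
import random

def generate_system(num_vars=2, coeff_range=9):
    """Generate a random linear-equation system with a known integer
    solution, so the exercise is solvable by construction."""
    solution = [random.randint(-coeff_range, coeff_range) for _ in range(num_vars)]
    equations, rhs = [], []
    for _ in range(num_vars):
        coeffs = [random.randint(-coeff_range, coeff_range) for _ in range(num_vars)]
        equations.append(coeffs)
        # Right-hand side computed from the hidden solution.
        rhs.append(sum(c * x for c, x in zip(coeffs, solution)))
    return equations, rhs, solution

# Difficulty (second item) could be approximated by the number of variables
# and the coefficient range: more variables and larger numbers, higher level.
A, b, x = generate_system(num_vars=3)
for coeffs, value in zip(A, b):
    print(" + ".join(f"({c})x{i + 1}" for i, c in enumerate(coeffs)), "=", value)
```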
Let us now compare the evaluation of this gamified version to the single linear-equations system solving activity:
  • [1] Open Decision Space. Students have the freedom to define different strategies to solve tasks and challenges based on linear-equation systems.
  • [1] Challenge. There are different difficulty levels, defined as progressive and linear.
  • [2] Learning by trial and error. Students are limited by factors like time during sessions but not by their mistakes. They can fail many times and continue, without limiting their final score.
  • [2] Progress assessment. There are several measures, like experience points, regular points, levels, achievements and unlocked difficulties, that give students great detail on their progress.
  • [1] Feedback. Students receive feedback from the system with respect to their solutions and actions. They see their progress, know whether they have done right or wrong and can also fix their failures. Feedback is not complete, and cause-effect relationships may sometimes be diffuse.
  • [2] Randomness. Systems are generated, so randomness is present most of the time in the system. Moreover, random events are another source of purposely designed unpredictability for students, which induces surprises.
  • [1] Discovery. There is some unlockable content and there are some secret levels and achievements, but the design could be improved to include more surprises and learning through discovery.
  • [1] Emotional entailment. Random events are based on simple stories, like deactivating a bomb. Also, time limitations and surprises target emotions, but there is a lack of a general story, characters and appropriate aesthetics.
  • [0] Playfulness enabled. There is a small amount of creativity involved in the way teams can approach tasks, but goals are clearly defined and there is not much room for free play outside the rules of the system.
  • [2] Automation. A mobile app with a server gives a moderate level of automation and control, with minimal manual intervention required. Feedback is immediate, although it might not be detailed or complete. However, this last part is improvable in new versions of the app.
This gamified version of the activity scores 1 + 1 + 2 + 2 + 1 + 2 + 1 + 1 + 0 + 2 = 13 points in total. The improvement over the 2 points obtained by the plain activity is clear, even allowing for variability in how the rubric's criteria are interpreted, which could shave off some points. In this sense, the rubric also shows potential for helping practitioners improve their designs: the proposed design could have been produced by going through the items of the rubric and adding ideas to improve each one. An experienced designer would probably consider the design as a whole product rather than as a separate set of characteristics. However, the set of separate characteristics is much more manageable for a trainee and can easily lead to initial designs like the one proposed here.
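As a closing illustration of this checklist-style use, the short Python sketch below encodes the scores discussed above and ranks the characteristics from weakest to strongest, so a trainee can see at a glance which items to tackle in the next iteration. The data structure and the ranking idea are our own illustration, not part of the rubric itself.

# Rubric scores (0-2 per characteristic) for the gamified version,
# exactly as assessed in the evaluation above.
gamified = {
    "Open Decision Space": 1, "Challenge": 1,
    "Learning by Trial and Error": 2, "Progress Assessment": 2,
    "Feedback": 1, "Randomness": 2, "Discovery": 1,
    "Emotional Entailment": 1, "Playfulness Enabled": 0, "Automation": 2,
}
PLAIN_ACTIVITY_TOTAL = 2  # total scored by the non-gamified exercise

total = sum(gamified.values())
print(f"Gamified total: {total}/20 (plain activity: {PLAIN_ACTIVITY_TOTAL}/20)")

# The lowest-scoring characteristics are natural improvement targets.
for name in sorted(gamified, key=gamified.get)[:3]:
    print(f"Improve next: {name} (score {gamified[name]})")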

5. Conclusions

In this paper we have presented a rubric as an instrument to help new Gamification practitioners assess designs. Its aim is to lower the entry barrier to gaining experience in game-design-based Gamification. Experience is obtained through practice, but practice without guidance is far more difficult and frustrating. The rubric provides this guidance.
The rubric can be used to assess a given design, to analyse its strengths and weaknesses, and to highlight areas for improvement. These uses focus on helping practitioners learn and develop experience in game-design-based Gamification.
Due to the artistic nature of Game Design, the rubric is built on the authors' previous experience and on works from experts. The rubric itself is formulated with flexible, interpretable criteria so as to accommodate the subtle perceptual details of games and Gamification experiences. This is both a limitation and a strength: it cannot provide objective assessment, but it allows considering emotions and player experiences, which are key to a successful Gamification design.
As practical experience is the main basis for successful game-design-based Gamification, the rubric will probably be far too simplistic for experienced practitioners. This is an intended limitation, as its focus is on helping newcomers.
Four sample applications of the rubric have been shown, in two pairs: the Super Mario Bros game and the 16-week gamified course studied by Hanus and Fox [41]; and a linear-equation solving exercise and its gamified version proposed by Llorens et al. [42]. The four applications yield results consistent with previous evidence and general perception. Although much more evidence is required to assess the general validity of the rubric, this initial evidence encourages further testing and analysis.

Author Contributions

Conceptualization, F.J.G.-D., C.J.V.-A., R.M.-C. and F.L.-L.; Investigation, R.S.-C. and P.C.-R.; Writing—original draft, F.J.G.-D. and C.J.V.-A.; Writing—review & editing, F.J.G.-D., C.J.V.-A., R.M.-C. and F.L.-L.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From Game Design Elements to Gamefulness: Defining "Gamification". In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, 28–30 September 2011; ACM: New York, NY, USA, 2011; pp. 9–15.
  2. Nacke, L.E.; Deterding, S. The maturing of gamification research. Comput. Hum. Behav. 2017, 71, 450–454.
  3. Linehan, C.; Kirman, B.; Lawson, S.; Chan, G. Practical, Appropriate, Empirically-validated Guidelines for Designing Educational Games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; ACM: New York, NY, USA, 2011; pp. 1979–1988.
  4. Nah, F.F.H.; Zeng, Q.; Telaprolu, V.R.; Ayyappa, A.P.; Eschenbrenner, B. Gamification of Education: A Review of Literature. In Lecture Notes in Computer Science; Nah, F.F.H., Ed.; Springer International Publishing: Cham, Switzerland, 2014; pp. 401–409.
  5. Hamari, J.; Koivisto, J.; Sarsa, H. Does Gamification Work?—A Literature Review of Empirical Studies on Gamification. In Proceedings of the 47th Hawaii International Conference on System Sciences, Waikoloa, HI, USA, 6–9 January 2014.
  6. Fitz-Walter, Z.; Johnson, D.; Wyeth, P.; Tjondronegoro, D.; Scott-Parker, B. Driven to drive? Investigating the effect of gamification on learner driver behavior, perceived motivation and user experience. Comput. Hum. Behav. 2017, 71, 586–595.
  7. Kocakoyun, S.; Ozdamli, F. A Review of Research on Gamification Approach in Education. In Socialization; Morese, R., Palermo, S., Nervo, J., Eds.; IntechOpen: Rijeka, Croatia, 2018; Chapter 4.
  8. Koivisto, J.; Hamari, J. The rise of motivational information systems: A review of gamification research. Int. J. Inf. Manag. 2019, 45, 191–210.
  9. Werbach, K. (Re)Defining Gamification: A Process Approach. In Persuasive Technology; Spagnolli, A., Chittaro, L., Gamberini, L., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 266–272.
  10. Chou, Y. Actionable Gamification: Beyond Points, Badges, and Leaderboards; CreateSpace Independent Publishing Platform: Milpitas, CA, USA, 2015.
  11. Deterding, S. Eudaimonic Design, or: Six Invitations to Rethink Gamification. In Rethinking Gamification; Fuchs, M., Fizek, S., Ruffino, P., Schrape, N., Eds.; Meson Press: Lüneburg, Germany, 2014; pp. 305–331.
  12. Koster, R. A Theory of Fun for Game Design; Paraglyph Press: Scottsdale, AZ, USA, 2004.
  13. Schell, J. The Art of Game Design: A Book of Lenses; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2008.
  14. Gee, J.P. What Video Games Have to Teach Us About Learning and Literacy, 2nd revised and updated ed.; Palgrave Macmillan: New York, NY, USA, 2007.
  15. Gee, J.P. Good Video Games and Good Learning; Peter Lang Inc., International Academic Publishers: New York, NY, USA, 2007.
  16. Kreimeier, B. The Case for Game Design Patterns. Gamasutra Featured Article, 2002. Available online: https://www.gamasutra.com/view/feature/132649/the_case_for_game_design_patterns.php (accessed on 4 November 2019).
  17. Lindley, C.A. Game Taxonomies: A High Level Framework for Game Analysis and Design. Gamasutra Featured Article, 2003. Available online: https://www.gamasutra.com/view/feature/131205/game_taxonomies_a_high_level_.php (accessed on 4 November 2019).
  18. Hunicke, R.; LeBlanc, M.; Zubek, R. MDA: A Formal Approach to Game Design and Game Research. In AAAI Workshop: Challenges in Game Artificial Intelligence, Volume 1; Technical Report; AAAI: Menlo Park, CA, USA, 2004.
  19. Reeves, B.; Read, L. Total Engagement: Using Games and Virtual Worlds to Change the Way People Work and Businesses Compete; Harvard Business Review Press: Boston, MA, USA, 2009.
  20. Tondello, G.F.; Kappen, D.L.; Mekler, E.D.; Ganaba, M.; Nacke, L.E. Heuristic Evaluation for Gameful Design. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts, Austin, TX, USA, 16–19 October 2016; ACM: New York, NY, USA, 2016; pp. 315–323.
  21. Rapp, A. Designing interactive systems through a game lens: An ethnographic approach. Comput. Hum. Behav. 2017, 71, 455–468.
  22. Strawson, P.F.; Wittgenstein, L. Philosophical Investigations. Mind 1954, 63, 70.
  23. Bogost, I. Persuasive Games: Exploitationware. Gamasutra, 2011. Available online: http://www.gamasutra.com/view/feature/6366/persuasive_games_exploitationware.php (accessed on 29 June 2019).
  24. Desurvire, H.; Wiberg, C. Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration. In Online Communities and Social Computing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 557–566.
  25. Tromp, N.; Hekkert, P.; Verbeek, P.P. Design for Socially Responsible Behavior: A Classification of Influence Based on Intended User Experience. Des. Issues 2011, 27, 3–19.
  26. Linehan, C.; Bellord, G.; Kirman, B.; Morford, Z.H.; Roche, B. Learning Curves: Analysing Pace and Challenge in Four Successful Puzzle Games. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play—CHI PLAY, Toronto, ON, Canada, 19–21 October 2014; ACM Press: New York, NY, USA, 2014; pp. 181–190.
  27. Llorens-Largo, F.; Gallego-Durán, F.J.; Villagrá-Arnedo, C.J.; Compañ-Rosique, P.; Satorre-Cuerda, R.; Molina-Carmona, R. Gamification of the Learning Process: Lessons Learned. IEEE Rev. Iberoamericana Tecnol. Aprendizaje 2016, 11, 227–234.
  28. Villagrá-Arnedo, C.; Gallego-Durán, F.J.; Molina-Carmona, R.; Llorens-Largo, F. PLMan: Towards a Gamified Learning System. In Learning and Collaboration Technologies; Zaphiris, P., Ioannou, A., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 9753, pp. 82–93.
  29. Deci, E.L.; Ryan, R.M. Handbook of Self-Determination Research; University of Rochester Press: Rochester, NY, USA, 2004.
  30. Demaine, E.D.; Grandoni, F. (Eds.) Super Mario Bros. Is Harder/Easier than We Thought. In Leibniz International Proceedings in Informatics (LIPIcs); Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik: Dagstuhl, Germany, 2016; Volume 49.
  31. Nacke, L.; Lindley, C.A. Flow and Immersion in First-person Shooters: Measuring the Player's Gameplay Experience. In Proceedings of the 2008 Conference on Future Play: Research, Play, Share, Toronto, ON, Canada, 3–5 November 2008; ACM: New York, NY, USA, 2008; pp. 81–88.
  32. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience; Harper Perennial: New York, NY, USA, 1991.
  33. Selye, H. The general adaptation syndrome and the diseases of adaptation. J. Allergy Clin. Immunol. 1946, 17, 231–247.
  34. Forget, A.; Chiasson, S.; Biddle, R. Lessons from Brain Age on persuasion for computer security. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 4435–4440.
  35. Ayton, P.; Fischer, I. The Hot Hand Fallacy and the Gambler's Fallacy: Two Faces of Subjective Randomness? Mem. Cognit. 2005, 32, 1369–1378.
  36. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  37. Prensky, M. Digital Game-Based Learning; McGraw-Hill: New York, NY, USA, 2001.
  38. Prensky, M. Don't Bother Me Mom–I'm Learning! Paragon House Publishers: St. Paul, MN, USA, 2006.
  39. Huizinga, J. Homo Ludens: A Study of the Play-Element in Culture; Beacon Press: Boston, MA, USA, 1955.
  40. Jensen, L.J.; Barreto, D.; Valentine, K.D. Toward Broader Definitions of "Video Games": Shifts in Narrative, Player Goals, Subject Matter, and Digital Play Environments. In Examining the Evolution of Gaming and Its Impact on Social, Cultural, and Political Perspectives; IGI Global: Hershey, PA, USA, 2016; pp. 1–37.
  41. Hanus, M.D.; Fox, J. Assessing the effects of gamification in the classroom: A longitudinal study on intrinsic motivation, social comparison, satisfaction, effort, and academic performance. Comput. Educ. 2015, 80, 152–161.
  42. Llorens-Largo, F.; Molina-Carmona, R.; Gallego-Durán, F.J.; Villagrá-Arnedo, C.J. Guía para la Gamificación de Actividades de Aprendizaje [A Guide for the Gamification of Learning Activities]. Novatica 2018. Available online: https://www.novatica.es/guia-para-la-gamificacion-de-actividades-de-aprendizaje (accessed on 4 November 2019).
Figure 1. Start of the first level of the Super Mario Bros game. There are virtually infinite possible decisions, in the form of movement sequences. Players can press up to 60 combinations of inputs per second.
Figure 2. (Left) The flow channel. (Center) Linear incremental difficulty design that perfectly matches abilities. (Right) Rhythmic incremental difficulty design.
Figure 3. General Adaptation Syndrome. Challenging tasks produce stress, test available abilities and yield failures. Present abilities are improved as resistance, then new ones are developed as super-compensation. Relaxing helps consolidate acquired abilities, while continued stress ends in exhaustion.
Figure 4. (Left) Brain Age 2, Sum Totaled. The player writes '12' (7 + 5) to destroy the monster at the top before it falls down and takes one of the three lives. (Right) Traditional addition practice.
Figure 5. (Left) An example world constructed in Minecraft. As with Lego™ blocks, no rule forces players to build anything specific. Creations come out of personal will, simply because the game allows them. (Right) In Goat Simulator there is no specific reward for jumping over an ultralight, but players do it because they can and it is fun: it is a way of experimenting, just like in the real world.
Table 1. Ten-characteristic game-design-based Gamification rubric. Each characteristic is scored 0, 1 or 2.
  • Open Decision Space
    0: Not open. No real decisions to take; only correct/incorrect.
    1: Decision-tree-like. Designed decision space, with options but limited.
    2: Completely open. Multiple/infinite options; continuous decision spaces.
  • Challenge
    0: Single difficulty/activity. No activity-ability match. Punishments prevent beneficial attempts.
    1: Incremental difficulty. Speculative design; subjective matching; subjective measures.
    2: Sinusoidal difficulty progression. Designed activity-ability match; measured, balanced, tested.
  • Learning by Trial and Error
    0: Failure punished. Maximum marks only achievable without failure.
    1: Failure permitted. Maximum marks achievable with some failures.
    2: Failure encouraged for learning. Maximum marks achievable independently of failures.
  • Progress Assessment
    0: No progress measures; no feedback on progress.
    1: Some progress measures defined. Some feedback on status/progress; lack of precision.
    2: All progress defined and measured. Detailed feedback on status/progress; next steps are clear.
  • Feedback
    0: None/minimal feedback response to actions. Cause-effect learning is difficult/impossible.
    1: Some feedback response; some actions with feedback. Feedback not immediate. Some cause-effect learning is possible.
    2: All actions produce cause-effect feedback. Feedback immediate or timely adequate. Cause-effect learning.
  • Randomness
    0: Everything is predictable. No randomness involved; no surprises.
    1: Some unpredictability. Some random events or parts of activities. Speculative/casual design of random parts.
    2: Measured unpredictable content and random parts of activities. Purposively designed; surprises included, designed and balanced.
  • Discovery
    0: No new content; no discovery; no unlocking. Content is fixed.
    1: Activities present new content on progress. Some unlockable content. New content does not deliver surprises.
    2: New content is presented at a measured pace. Discoverable content rewards user interest; surprises on discovery.
  • Emotional Entailment
    0: No design that targets emotions. No characters, stories or aesthetics. Focus on factual content.
    1: Some form of design to target emotions. Use of template stories, characters or aesthetics.
    2: Imaginary experiences. Specifically designed characters, stories and/or aesthetics. Design focuses on creating an emotional experience.
  • Playfulness Enabled
    0: Concrete goals; specific procedures. No room to experiment; no curiosity generated.
    1: Selectable goals and/or procedures. Room for development of personal creations. Optional activities with a creative component.
    2: Selectable/generable goals; creative procedures. Users may play with goals, content and procedures in non-predesigned ways. Curiosity rewarded.
  • Automation
    0: No automation; manual intervention. All or most rules are manually enforced. Slow feedback response time.
    1: Some level of automation; optimized manual intervention. Rules are partly enforced automatically. Improved feedback response time.
    2: Everything automated; no or minimal manual intervention required. Rules are/can be enforced automatically. Immediate or fastest feedback response time.
