Concept Paper

Rethinking Performance Gaps: A Regenerative Sustainability Approach to Built Environment Performance Assessment

1 Sustainable Built Environment Performance Assessment Network, The John H. Daniels Faculty of Architecture, Landscape and Design, University of Toronto, Toronto, ON M5S 2J5, Canada
2 Civil & Mineral Engineering and Mechanical & Industrial Engineering, University of Toronto, Toronto, ON M5S 1A4, Canada
3 Munk School of Global Affairs & Public Policy, University of Toronto, Toronto, ON M5S 3K7, Canada
4 School of the Environment, University of Toronto, Toronto, ON M5S 3K7, Canada
5 Department of Architectural Science, Ryerson University, Toronto, ON M5S 1A4, Canada
* Author to whom correspondence should be addressed.
Sustainability 2018, 10(12), 4829; https://doi.org/10.3390/su10124829
Submission received: 7 November 2018 / Revised: 13 December 2018 / Accepted: 14 December 2018 / Published: 18 December 2018

Abstract

Globally, there are significant challenges to meeting built environment performance targets. The gaps found between the predicted performance of new or retrofit buildings and their actual performance impede an understanding of how to achieve these targets. This paper points to the importance of reliable and informative building performance assessments. We argue that if we are to make progress in achieving our climate goals, we need to reframe built environment performance with a shift to net positive goals, while recognizing the equal importance of human and environmental outcomes. This paper presents a simple conceptual framework for built environment performance assessment and identifies three performance gaps: (i) the Prediction Gap (e.g., modelled versus measured energy and water consumption); (ii) the Expectations Gap (e.g., occupant expectations in pre- and post-occupancy evaluations); and (iii) the Outcomes Gap (e.g., thermal comfort measurements versus survey results). We question whether measured or experienced performance constitutes the ‘true’ performance of the built environment. We further identify a “Prediction Paradox”, indicating that it may not be possible to achieve more accurate predictions of building performance at the early design stage. Instead, we propose that Performance Gaps be seen as creative resources, used to improve the resilience of design strategies through continuous monitoring.

1. Introduction

The climate change goals being adopted by national and subnational jurisdictions around the world imply the need for substantial reductions in energy use in the buildings sector [1,2,3]. At the COP23 conference in Bonn, Germany in November 2017, mayors from 25 cities around the world, representing 150 million citizens, pledged to cut their carbon emissions to net zero by 2050 [4]. At the same meeting, the Global Covenant of Mayors for Climate and Energy (representing 7494 cities worldwide) released a report indicating that these Global Covenant cities could reduce CO2e emissions by nearly 1.3 billion tons per year by 2030 (current emissions in those cities are about 3.5 billion tons) [5].
The implications of these and other targets for the buildings sector are immense, and have perhaps not been fully recognized in the building community. As one illustrative example, in 2007, the City of Toronto, Canada’s largest city, with a population of 2.8 million people, adopted a target of an 80% reduction in GHG emissions from 1992 levels by 2050 (City of Toronto, 2007). In July 2017, City Council approved more specific targets as part of the new TransformTO climate policy [6,7], including:
  • 100 percent of new buildings are to be designed and built to be near zero greenhouse gas emissions by 2030; and
  • 100 percent of existing buildings are to be retrofitted to the highest emission reduction technically feasible by 2050, on average achieving a 40 percent energy performance improvement over 2017 levels, while limiting affordability impacts to residents.
In the modelling work accompanying the TransformTO program, the number of residential retrofits required between now and 2030 to achieve these targets is about 24,000 units per year [8].
There are two challenges in achieving the kinds of results mandated in the City of Toronto and similar climate plans: achieving the scale of activity required; and ensuring that this activity actually achieves the required savings or performance. On the question of scale, we need to increase, by one or two orders of magnitude, the proportion of new buildings and retrofits that meet such performance standards. This will be a major task, as these proportions are currently modest in most jurisdictions.
On the question of performance, there is a growing literature about the existence and causes of a significant gap between the predicted and actual performance of new and retrofit buildings (usually defined in terms of energy use), reviewed in detail in Section 3.1 below. Here, we simply wish to highlight the critical importance of reliable and informative building performance assessments if we are to address that gap and make significant progress in achieving our climate goals.
This paper concerns itself with performance assessment, though we believe our arguments also have implications for the problem of scaling up the number of new builds and retrofits that adopt ambitious sustainability goals. We propose a two-pronged approach to address building performance issues.
The first part of our proposed approach to performance assessment involves reframing the goals of building performance. We reframe these goals in terms of broadening the focus of performance assessment: (i) to move beyond a sole emphasis on energy performance to a broader sustainability focus, recognizing the equal importance of both human and environmental outcomes; (ii) to move beyond net zero to net positive approaches; and (iii) to move beyond a focus on individual buildings, by incorporating neighbourhood-scale built environment systems. (While we consider the third, neighbourhood-scale component of our proposed reframing essential, it will not be explored in this paper).
The second part of our proposed approach follows from this reframing, and involves integrating the qualitative and quantitative dimensions of performance assessment over time in order to address both human and environmental outcomes.
This dual approach—reframing the goals of building performance, and integrating the environmental and human dimensions of performance assessment over time—gives rise to a simple conceptual framework, which is described below in Section 3. The conceptual framework then allows the identification of three performance gaps, which we further discuss in Section 3 as well:
  • Predicted versus actual resource use (e.g., modelled and measured energy, water consumption);
  • Expectations regarding the performance of sustainable buildings versus the actual lived experience of the building occupants (e.g., pre- and post-occupancy evaluations);
  • Measured performance versus lived experience (e.g., thermal comfort measurements and survey results).

2. Reframing the Performance of the Built Environment: A Regenerative Sustainability Approach

Performance assessment has typically been approached by the building industry in terms of the goals of minimizing the negative environmental impacts of the buildings (e.g., by reducing energy use or emissions), and to a much lesser degree, of improving the comfort and wellbeing of building occupants. Where these two goals are pursued on a single project, they have tended to be addressed somewhat separately, with the former being addressed through modeling and the quantitative measurement of environmental performance, and the latter having mainly had to do with the qualitative post-occupancy assessment of the views of building occupants.
Both the public identity and the marketing of high performance buildings have strongly centred on environmental measures (e.g., ‘green buildings’), emphasizing the reduction of their impacts (e.g., ‘net zero energy’ or ‘near zero emission’ buildings). The human performance side is rarely emphasized in these materials, and the expressed goal is to avoid negative environmental impacts.
To date, this approach has not been very successful: although the numbers are growing, green buildings remain a small fraction of new builds and retrofits in both the residential and commercial sectors. For example, as of 2014, LEED-certified buildings across building types amounted to 10% of new buildings in Canada, up from 0.8% over 2004–2009 [9].
What might be more useful is to modify both approaches as follows:
  • The initial focus should be on human wellbeing (e.g., health, productivity and happiness); and
  • To the extent possible, buildings should be designed to be net positive in both human and environmental terms.
We call this approach regenerative sustainability [10].
While adopting a net positive approach to building design raises a number of conceptual and practical difficulties [10,11], it is starting to be applied in practice [12,13]. The Living Building Challenge of the International Living Future Institute [14] is an example of a building certification scheme based on net positive principles. One regenerative sustainability building that has been the subject of much study is the Centre for Interactive Research on Sustainability (CIRS) at the University of British Columbia in Canada. Research indicates that the net positive environmental goals are less easily achieved than the human goals [15,16,17]. A crucial consideration is that regenerative approaches are systems-based and are characterized by inherently unpredictable emergent properties, thus exhibiting levels of complexity that are difficult to measure and incorporate in practice. This suggests a need to emphasize process outcomes over performance outcomes [10].
A critical advantage of a regenerative sustainability approach to the built environment is that many of the design strategies that address human wellbeing (e.g., natural light, air quality, thermal comfort, natural materials) are essentially the same strategies that deliver environmental performance and climate goals. In other words, a focus on human wellbeing brings many aspects of environmental performance along for the ride. A critical question becomes where the two sets of goals overlap and reinforce each other, where they are independent, and where they might be in conflict.
A second advantage of this approach is that a focus on improving both human and environmental wellbeing is much more interesting to purchasers, leasers, developers and perhaps even designers than simply reducing damage.
For example, while much literature on the advantages of sustainable buildings focuses on potential energy savings, labour costs per square metre of office buildings are much higher than energy costs per square metre. As a result, the economic gains from a small improvement in labour productivity for most occupants of office buildings would outweigh the savings from even significantly improved energy efficiency. Since labour costs are a familiar and important part of the economic calculus of virtually all companies, if such productivity improvements could be reliably calculated, the resultant savings would likely be much more influential in determining office space lease rates than less familiar and smaller energy savings. More generally, a building that will make people healthier, happier and more productive is likely to be more marketable than one that simply does less environmental damage than typical buildings. However, as discussed in more detail below, this depends critically on developing strong metrics of human wellbeing that can be tied to conditions in the built environment.
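To make the magnitudes concrete, consider an illustrative calculation. The figures below are assumed round orders of magnitude, in the spirit of the widely cited “3-30-300” rule of thumb, not data from the studies referenced in this paper:

```latex
% Illustrative orders of magnitude only (assumed round figures, not measured data)
\[
\begin{aligned}
\text{annual energy cost} &\approx \$30/\mathrm{m^2} &\quad
\text{annual labour cost} &\approx \$3000/\mathrm{m^2} \\
\text{20\% energy saving} &\approx \$6/\mathrm{m^2} &\quad
\text{1\% productivity gain} &\approx \$30/\mathrm{m^2}
\end{aligned}
\]
```

Under these assumptions, a 1% productivity gain is worth five times as much per square metre as a 20% energy saving.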
The combination of these two factors, (i) broadening beyond environmental performance to a wider focus on both human and environmental wellbeing, and (ii) moving beyond harm reduction and net zero goals to net positive approaches, offers the potential for a regenerative sustainability approach that may help the sustainable building industry become much more successful than in the past in achieving the scale and effectiveness required to meet society’s ever-more-ambitious sustainability goals.

3. Proposed Integration of Quantitative and Qualitative Performance Assessment over Time

In order to achieve the potential advantages of a regenerative sustainability approach to the built environment, we need to develop systems of performance assessment that allow us to evaluate the potential for net positive outcomes in both human and environmental terms. We propose an approach to built environment performance assessment based on the following simple conceptual framework.
The four quadrants of Figure 1 show the kinds of analysis typically undertaken for new or retrofitted buildings. The top two quadrants are the realm of mostly quantitative analysis, typically of environmental systems such as energy or water, through instrumented measurements. At the design stage, use is frequently made of building modeling processes that predict the expected performance of the building, or the retrofit, while once the building is built, monitoring processes provide assessments of actual performance. The gap between the two is the well-known performance gap referenced above.
In contrast to environmental performance assessment, the assessment of human systems in buildings is usually conducted qualitatively (bottom two quadrants), through methods focused on occupants, using input and feedback instruments such as surveys and interviews. Such assessment is also much less frequent, consisting mainly of episodic post-occupancy or post-retrofit evaluations.
A Post-Occupancy Evaluation (POE) provides information on how a building functions once it has been built, in terms of whether and how the building meets stakeholder and user expectations, often with regard to user satisfaction with the building environment [18]. There is substantial literature on POE and POE cases, and researchers have developed a range of strategies for their use [19,20,21,22].
In retrofits, Pre-OEs are not uncommon [23], but for new builds, Pre-OEs are not typically carried out, meaning that there is usually no baseline for post-occupancy evaluations. Moreover, the quantitative performance of environmental systems is rarely compared to occupant experience of these systems, e.g., measured air quality versus experienced air quality (for two recent exceptions, see Chang and Touchie (2017) and Touchie et al. (2016)). However, there is growing interest in such comparisons, with some published studies [24,25,26,27,28,29]. Researchers have acknowledged the need for pre-occupancy evaluations in health care environments, for example, as there is an increasing focus on evidence-based design and a desire for more scientific bases for design decisions [30]. In their manifesto for “building human agency”, Cole, Brown and MacKay propose a directive that addresses pre- and post-occupancy evaluations: “Pre- and post- occupancy evaluations in new and existing buildings should become mandatory steps within the integrated design process to accelerate our understanding of the systemic inhabitants-architecture interactions” [31]. Section 3.2 discusses some relevant survey and interview Pre-OE and POE methods and questions in more depth.
Figure 1 provides a basis for articulating three performance gaps in the built environment, introduced above and indicated by circled letters on the diagram:
  • Prediction Gap: Predicted versus actual resource use (e.g., modelled and measured energy, water consumption);
  • Expectations Gap: Expectations regarding the performance of sustainable buildings versus the actual lived experience of the building occupants (e.g., pre- and post-occupancy evaluations);
  • Outcomes Gap: Measured performance versus lived experience (e.g., thermal comfort measurements and survey results).
The following sections of this paper discuss these gaps in more detail.

3.1. Prediction Gap: Predicted vs. Actual Performance

The most well-documented performance gap in sustainable buildings is the quantitative discrepancy between the predicted and the actual performance of Environmental Systems such as energy, water, or carbon emissions. We are calling this discrepancy the Prediction Gap, and distinguish it from the other two Gaps that we explore below: it is typically measured in terms of energy use, and it is straightforward to quantitatively measure and compare calculations at the design or pre-retrofit stage to energy bills or measured performance on site.
Many published studies have shown that sustainable buildings do not perform as expected [32,33,34]. Typically, they perform worse than expected, but sometimes they do better than their targets for designed energy use intensity [35]. Newsham et al. [36] found in a re-analysis of LEED certified office buildings that, generally, they perform better than non-LEED certified buildings, but that at an individual level results varied widely between designed and measured performance. In a study of 66 Canadian university buildings, Storey [37] found no correlation between LEED certification and energy performance. Finally, in a study of nine high-performing buildings in Canada, Bartlett et al. [38] found that three of the nine had actual energy use significantly higher than predicted.
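As a minimal sketch of how simply this gap can be quantified once both numbers are in hand (the EUI figures below are hypothetical):

```python
# Minimal sketch: the Prediction Gap as the relative difference between
# modelled and metered energy use intensity (EUI). Figures are hypothetical.

def prediction_gap(predicted_eui: float, measured_eui: float) -> float:
    """Gap as a fraction of the prediction; positive means the building
    uses more energy than the design-stage model predicted."""
    return (measured_eui - predicted_eui) / predicted_eui

# Design model predicts 120 ekWh/m2/yr; two years of utility bills average 168.
print(f"Prediction Gap: {prediction_gap(120.0, 168.0):+.0%}")  # +40%
```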

3.1.1. Assessing the Prediction Gap

This section outlines findings from recent studies of the causes of the Prediction Gap, in particular modeling inaccuracies, assumptions about user behaviour, organizational and disciplinary silos, and the lack of comprehensive studies.
Modelling inaccuracies, often rooted in incorrect assumptions about the building and its use, are frequently blamed for the Prediction Gap. Common reasons include:
  • simulation tools tend to be incomplete in their representation of energy loads related to specific areas or systems in the building, especially when a project uses newer design strategies such as natural ventilation or advanced renewable energy and water systems which present challenges for modeling software;
  • energy models struggle to account for the actual/future usage of a building and seldom account for occupancy schedules and levels;
  • accurate representations of occupant behaviour are not part of the typical modelling practices;
  • weather files used in simulations contain historical weather data, which may not reflect actual conditions [39]; and
  • studies have shown design-assist and compliance energy models prepared at the design stage are rarely verified or calibrated through as-built models, which provide predictions of energy performance based on what is actually built [16] (pp. 12–14).
Another source of modeling inaccuracy is the incorporation of rated equipment efficiency rather than system/plant efficiency and operating strategies [40]. Bartlett et al. [38] cite the above, as well as quality issues, occupancy changes, commissioning, and operational issues, which can lead to additional costs for building owners, reduced occupant productivity, and buildings that fail to live up to their potential. Also, there is a need to properly model the system components and their control algorithms. To do this, the design must incorporate real performance curves, not just single point efficiencies. The perceived failure of a building due to these discrepancies has been termed “the credibility gap”, and it has created cynicism in the building industry, where people find it hard to ‘trust’ green design [41].
A 2018 special issue of Building Research & Information dedicated to the energy performance gap and its causes suggests the importance of improving behavioural assumptions in energy modeling and simulation, and delves further into other causes of the gap relating to occupant and other stakeholder practices [42]. Studies showed that occupants are a more diverse group than assumed [43], and similarly, that data on occupancy should be grouped by age, income and other variables, rather than considering users as a homogeneous group [44].
In a similar vein, studies have shown that people use their buildings differently, even in similar physical spaces, which makes accurate prediction difficult. For example, Gram-Hanssen [45] analyzed quantitative and qualitative data from different households living in similar homes in a suburb of Copenhagen and found that energy consumption due to usage patterns varied significantly, with some families using three times as much energy as others in similar houses. As a result, a challenge to better understanding the gap between predicted and actual performance lies in better identifying the qualities and characteristics of building users or occupants: to know how to design for what people want and will do, the users and their wishes must be known [46].
Organizational structures and the typical disciplinary silos of the building process also contribute to the Prediction Gap. In such cases, the gap arises from institutional and cultural barriers to communication. Fedoruk et al. [15] found that institutional issues arose from the way the various stages of the building life cycle were specified, contracted and implemented; these had the greatest impact on the discrepancy between anticipated and achieved building energy performance.
Another important issue is the nature of disciplinary roles on a typical project, where the person making the assumptions that define building performance goals is not the same person carrying out ongoing monitoring or taking lessons from this project to another one. This illustrates the need to have a commissioning agent on the project team from the beginning. Fedoruk et al. emphasized “the importance of having meaningful and effective building energy monitoring capabilities, an understanding of energy system boundaries in design and analysis, crossing the gaps between different stages of a building life cycle, and feedback processes throughout design and operation” [15] (p. 752). To this we would add the importance of an integrated design process, as well as an integrated approach to project delivery that considers the whole building life cycle throughout design, construction, commissioning and operation. “Beyond technical considerations or simply injecting new information, a rethink is required of how buildings are planned, designed, constructed, commissioned and operated in order to close the performance gap” [15] (p. 751).
There is a need for more detailed studies of the Prediction Gap that examine a range of data and causes. A good example is a recent study of nine Canadian buildings that compared predicted performance, based on design-stage modeling and green rating submissions, with actual performance over two years, using metered energy and water data from utility bills and submeters. The energy and water data were also compared to benchmarks for typical performance of similar buildings. In each of the buildings there was a significant gap between the measured and predicted performance of at least one of the systems being examined, and the researchers gained important insights into the difficulties of resolving the performance gap [38,47]. An unusual feature of this study was that it combined benchmarks of similar buildings, metered data, spot site measurements, interviews with the design team and building manager, and occupant survey data.
This comprehensive approach found that rigorous reconciliation between projected and actual performance was not considered possible, because it would mean revising performance projections made at the design stage to reflect actual building use and occupancy (for example, if more people used the building than initially assumed, predictions would need to be revised); indeed, across the nine projects, only one energy model had been recalibrated [38]. Their findings related to disciplinary silos, distinct workflows and timetables, lack of funding, and a lack of interaction, and confirmed many of the reasons for the Prediction Gap cited above. The lack of model recalibration means that, within the current way of working, design-stage assumptions are not corrected. The usual workflows and outputs are not designed to be paired with findings from after the building is built and inhabited. This leads to obvious difficulties in comparing and evaluating the success of building performance. Consistency in what is being evaluated, and communication about what data are collected between the design and occupancy stages of a building’s life span, are important considerations for this gap.
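One way recalibration is commonly operationalized (not discussed in the studies cited above, but offered here as context) is through the statistical criteria of ASHRAE Guideline 14, which treats a model calibrated against monthly data as acceptable when |NMBE| ≤ 5% and CV(RMSE) ≤ 15%. A minimal sketch under assumed data:

```python
# Minimal sketch of recalibration checking against the statistical criteria of
# ASHRAE Guideline 14 (monthly data: |NMBE| <= 5%, CV(RMSE) <= 15%). The
# monthly consumption figures below are hypothetical.
import numpy as np

def nmbe(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Normalized Mean Bias Error, %."""
    n = len(measured)
    return 100.0 * np.sum(measured - predicted) / ((n - 1) * np.mean(measured))

def cv_rmse(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of Variation of the Root Mean Square Error, %."""
    n = len(measured)
    return 100.0 * np.sqrt(np.sum((measured - predicted) ** 2) / (n - 1)) / np.mean(measured)

measured  = np.array([52, 48, 41, 35, 30, 28, 29, 31, 33, 40, 46, 51])  # metered, MWh
predicted = np.array([45, 43, 38, 33, 28, 26, 27, 28, 30, 36, 41, 46])  # design model, MWh

print(f"NMBE = {nmbe(measured, predicted):.1f}%, CV(RMSE) = {cv_rmse(measured, predicted):.1f}%")
# Here NMBE exceeds 5%, so the model would need recalibration before any
# rigorous reconciliation of predicted and actual performance.
```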

3.1.2. The Future of the Prediction Gap

The energy performance gap is the easiest to define and measure, and yet it remains an urgent problem. Buildings are not performing as expected, and energy policies and building regulations are not adequately acknowledging and planning for this gap. The Prediction Gap has been studied in a number of disciplines, and many researchers see this gap as a problem relating to assumed building characteristics that can be solved by better modeling. However, as noted above, a number of new studies argue that importance must be placed on understanding the nuances of institutional rules governing building design, construction and commissioning, occupant behaviour, and buildings in use.
Future research relating to the Prediction Gap will need to acknowledge and delve more deeply into fundamental questions about the barriers to better understanding this gap. There is a need to critically reflect upon what kinds of tools and methods we use to collect building performance data, what kinds of data to collect, who collects the data, and how they are interpreted.
Relevant to the challenges of this gap is what we have termed the Prediction Paradox. The Paradox lies in the issue of how to predict building performance accurately, and also in how to interpret results. If we want to better understand a building’s overall performance, the issues described above mean we cannot make accurate predictions; in fact, we argue that more information will not improve the predictive accuracy of building performance as a whole. This is an inherent result of the complexity and resultant emergent properties of socio-technical systems like buildings, not of a lack of knowledge. We can predict accurately only at the component level (e.g., heat loss through a specific material), where physical performance is well understood.
Much of the literature on what we are calling the Prediction Gap focuses on ways to narrow or eliminate it. However, due to the challenges of accurately predicting building performance outlined above, we argue that it is undesirable to attempt to resolve the complexities of building performance in terms of a single prediction for quantitative aspects such as energy performance.
Instead, we propose supplementing predictive approaches by moving in the direction of scenario analysis [48,49] and backcasting techniques [50,51] that focus on the range of possible outcomes, and on the ways that desirable outcomes could be approached. Developing alternative scenarios of performance based on different assumptions about behavioural and institutional issues would allow us to get a sense of the range of potential performance outcomes. At the same time, backcasting from desirable outcomes may help to guide the design process by focussing attention on those design strategies that work best to achieve design goals across the range of scenarios.
In this way, if the process of predicting performance is understood to be inherently uncertain, the design team can innovate within the predictive space by use of scenario analysis and backcasting. There are obvious challenges to instituting such processes for design teams and clients who are used to working towards specific performance targets. However, scenario analysis may present a good supplemental activity, as it can be seen as falling within the realm of conceptual design where various built forms and strategies are discussed, iterated, and analysed.
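A minimal sketch of what such design-stage scenario analysis could look like, using a toy surrogate model and assumed behavioural ranges (none of these numbers come from the literature cited here):

```python
# Minimal sketch of design-stage scenario analysis: instead of one point
# prediction, sample plausible ranges of behavioural assumptions and report
# the resulting spread of outcomes. The surrogate model and every parameter
# range below are assumptions for illustration, not values from the literature.
import random

def toy_annual_eui(occupant_density: float, setpoint_c: float,
                   plug_load_w_m2: float) -> float:
    """Hypothetical linear surrogate for a whole-building model (ekWh/m2/yr)."""
    base = 80.0
    return (base
            + 15.0 * (occupant_density - 0.05) / 0.05  # occupants per m2
            + 4.0 * (setpoint_c - 20.0)                # higher heating setpoint, higher load
            + 0.8 * plug_load_w_m2)                    # plug loads, W/m2

random.seed(42)
scenarios = sorted(
    toy_annual_eui(
        occupant_density=random.uniform(0.03, 0.10),
        setpoint_c=random.uniform(20.0, 24.0),
        plug_load_w_m2=random.uniform(5.0, 15.0),
    )
    for _ in range(1000)
)
# Report a range of outcomes rather than a single prediction.
print(f"EUI, 5th-95th percentile: {scenarios[49]:.0f}-{scenarios[949]:.0f} ekWh/m2/yr")
```

Backcasting would then ask which design strategies keep the upper tail of this distribution within the desired target.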

3.2. Expectations Gap: Official Story vs. Lived Experience

Gap B is the qualitative, social analogue of the quantitative performance gap discussed above, and occupies the Human Systems bottom half of Figure 1, noted as ‘B’: it is often present when comparing Pre-Occupancy and Post-Occupancy Evaluation results. The Expectations Gap in the context of occupancy assessment is defined by Coleman and Robinson [17] as the gap between what occupants expect and what they experience in a building, and is expressed in qualitative feedback through surveys and/or interviews. The gap therefore describes differences between an assumed or predicted story about building performance as perceived and shared by occupants (based on prior experience, and sometimes influenced by information available about the building) and the dynamic, ongoing lived experience of it [17].
In considering this gap, the role of people in buildings, and their assessment of the building through surveys and/or interviews, becomes of central importance. Cole et al. [52] argued for the necessity of involved occupants, with the aim of re-contextualizing comfort requirements in light of climate change goals and building energy use [3]. To enable this re-contextualization, occupant opinions and experiences are ideally sought and valued in building design and operations, allowing feedback and feedforward to design stakeholders [17,52,53,54,55,56].
Beyond ratings of comfort, which can be quantified and analysed statistically, themes and stories can be discerned from occupant commentary through content analysis and interpretation. Drawn out by survey and/or interview questions, once coded and interpreted [57,58,59,60,61], these stories can illuminate how the building performs, how it should have performed, or whether it has yet to become a space that enables health, well-being and productivity, for example.
The evidence for a qualitative performance gap can be seen in the form of disappointment and backlash (including claims of greenwashing in the case of a green or regenerative building) [17]. Thus, there are two levels at which the qualitative performance gap has potential negative consequences: at the level of occupants, whose disappointment about building performance may affect their perception of their employer or of the sustainable building movement as a whole; and at the level of designers, who rely on the good reputation of precedent buildings for continued success.
A particular instance of the Expectations Gap is found in highly sustainable buildings [62], where stakeholders’ expectations (those of designers, clients, and others with a vested interest in the building’s performance) become codified in promotional material, which then influences occupant expectations, a finding supported by other researchers [53,54]. Conceived of as ‘bids’ for social or normative alignment [63], occupant stories about the qualities of built space constitute a powerful socially binding force, and have implications for the normalization of current and future building design [17,52,62].
The characterization and content of stories or narratives about building energy efficiency has been discussed in policy literature [64,65,66], in which the concern is to bridge and close the performance gap. However, Coleman and Robinson argued that it may be more fruitful to understand this qualitative gap as a generative space for a re-making of the building performance story as a whole [17]. In doing so, the aim is to understand the building context as a continuously adaptable, and therefore, potentially optimizable setting, requiring ongoing feedback from occupants [55].
In other words, the very existence of the Expectations Gap calls for interactive adaptivity, a concept developed by Cole et al. [52], which asserts that building performance must take into account more than just the individual and ecological factors; performance must include broader societal circumstances, over time. Interactive adaptivity integrates “[a] dynamic and complex building system with a participatory process, interactivity between inhabitants, and between inhabitants and building elements”, and allows adaptation “to changing conditions (e.g., seasonal temperature change, or, on a larger scale, global climate change), resulting in a fluid but robust design that is responsive to social, ecological, and economic conditions over time.” [52] (p. 333). In effect, the building engages in a conversation with its occupants, leading to mutual adaptation over time.
The Jim Pattison Pavilion, in an iiSBE case study [67], provides a simple but non-trivial example of interactive adaptivity: windows fitted with red and green lights guide the occupant to open or close windows in order to maximize cooling. In turn, these lights constitute a tool for awareness of energy consumption, and the process as a whole engenders continual optimization. The Comfy app (www.comfy.com) provides another example, in which building response can adapt based on occupant feedback on temperature, lighting, room bookings and other aspects, thereby centring the occupant experience, with both building systems and occupants continuously learning from it.

3.2.1. Assessing the Expectations Gap

Exploring the Expectations Gap relies on data from Pre-Occupancy and Post-Occupancy Evaluations, or from pre-retrofit and post-retrofit studies. A Pre-Occupancy Evaluation (Pre-OE) or pre-retrofit evaluation is an evaluation used as a baseline against which future evaluations can be compared. Usually, the same survey is conducted in the Pre-OE and POE so that the data can be statistically compared. Responses in the Pre-OE and POE, with identical or different populations, can be compared in the sense that the data depict responses to the “before” and “after” moving or retrofit conditions.
Some assessment studies have used a control group, in the form of a sub-population of individuals that remain behind in the prior building: both groups (those remaining behind, and those moving) are surveyed with the Pre-OE, and later the POE, at the same time [29,68].
The formulation of control populations is challenging, however, since building typologies and programming, compounded by differing building tenants and populations, constitute highly diverse ‘entities’ with many variables affecting evaluation outcomes. Paired matches (the same individual is surveyed before and after the move or retrofit) may therefore provide a better way to control for effects, which we assume are perceived more consistently by a single individual than they are between different individuals.
Yet, the goal of surveying matched pairs of individuals pre- and post-occupancy is itself hampered by attrition, amplified by the sometimes lengthy interval between pre- and post-occupancy, as well as by the fact that it is often not known which tenants, let alone which individuals, will move into a building. In this case, an unmatched control (different individuals in the groups before and after the move) is assumed to be better than none.
The question of when to conduct the pre- and post-evaluations is also significant, and depends on the aim of the evaluation. If the desire is to capture deficiencies or the immediate impact of a retrofit, then earlier in the occupancy of the new building or retrofit is best; if it is to understand how the building performs under optimized conditions, then later in occupancy is best. The first 6–12 months of an occupant’s experience in a new build or retrofit is a sensitive adjustment period for both occupants and building operators. At this time, complaints are high and new habits and routines are being established by all. For the aim of identifying and fixing deficiencies, this is a good period to run a POE, but of course, it will likely turn up the largest Expectations Gap, which may also be highly transitory, depending on how soon the deficiencies are rectified. For later POEs, Paevere and Brown found that the case study green building (CH2) was still being fine-tuned after one year [69], while McCunn and Gifford suggest that 4 years may not be long enough for the positive effects of a green building to be felt [70]. As such, building operators would know best when the optimal time to run a POE is, or at least, they would be able to flag ongoing deficiencies during analysis. The season in which an assessment is run is also significant: if the pre- and post-occupancy evaluations are to be matched with as little variability as possible, then ideally the seasons are matched as well, meaning that a gap of at least a year between the two evaluations is necessary.
In terms of eliciting qualitative data, questions in the Pre-OE survey and/or interview [59,71] can be used to query occupant expectations of future conditions, benefits, and features, such as: anticipated IEQ performance, support of sustainable activities, the expected impact of building conditions and features on productivity, well-being and health, and so on. In the absence of interviews, a section for open comments in surveys usually elicits illuminating and unexpected feedback. As [72] indicates, surveys are often used as repositories for complaints; further, complaints are useful to investigate because they often indicate a change in what is normal and expected [62].
This data is then analysed through coding, categorization, distillation and interpretation [57,58,60,61] for themes that build into narratives (software like NVivo can be useful for this purpose), with outliers taken into consideration. Thus, pre- and post-commentary is analysed and coded for themes (e.g., the expression of skepticism, a feeling of forgiveness, the sense of tribe) and narratives (e.g., “I heard about the user interface we’d get and I was annoyed when we didn’t, but it’s not really a big deal—work is easier now because all my colleagues are here”), and the before and after stories are then compared for similarity and difference.
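For illustration only, a toy keyword tagger shows the structure of coded output; real thematic coding (e.g., in NVivo) is interpretive and iterative, and the themes and keywords below are assumptions:

```python
# Toy illustration of the structure of coded output: a keyword tagger over
# open comments. Real thematic coding (e.g., in NVivo) is interpretive and
# iterative; the themes and keywords here are assumptions for illustration.
comments = [
    "I heard about the user interface we'd get and I was annoyed when we didn't.",
    "Work is easier now because all my colleagues are here.",
]
themes = {
    "skepticism": ["annoyed", "doubt", "greenwash"],
    "sense of tribe": ["colleagues", "team", "together"],
}
for comment in comments:
    tags = [theme for theme, keywords in themes.items()
            if any(kw in comment.lower() for kw in keywords)]
    print(f"{tags or ['uncoded']}: {comment}")
```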
If data that can be analysed more quantitatively is desired, questions that directly ask about expectations before the move or retrofit can also provide a picture of difference or similarity in expectations, before and after a retrofit or move. For example, in the Pre-OE, a set of Likert scale questions could ask, “Rate your level of agreement with the statement: ‘I expect that air quality will be excellent at all times.’” Following up on the POE after the move or retrofit, the mirroring question would ask, “Rate your level of agreement with the statement, ‘My expectations were met regarding air quality.’” This data can be statistically analysed, and it can also be considered alongside the prevalent themes, narratives, and outliers produced by the analysis of interview questions and survey commentary, for further informal comparison.
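A minimal sketch of such a statistical comparison, with hypothetical 5-point ratings; the Wilcoxon signed-rank test suits matched pairs, while the Mann-Whitney U test suits unmatched pre/post groups:

```python
# Minimal sketch with hypothetical 5-point Likert data: Pre-OE expectation
# ratings vs. POE "expectations met" ratings. Wilcoxon signed-rank suits
# matched pairs (same individual pre and post); Mann-Whitney U suits
# unmatched pre/post groups.
from scipy.stats import mannwhitneyu, wilcoxon

pre  = [5, 4, 5, 4, 3, 5, 4, 4, 5, 3]  # "I expect that air quality will be excellent..."
post = [3, 4, 4, 2, 3, 4, 3, 3, 4, 2]  # "My expectations were met regarding air quality."

stat, p = wilcoxon(pre, post)      # paired design
print(f"Wilcoxon signed-rank: p = {p:.3f}")

stat, p = mannwhitneyu(pre, post)  # unmatched groups
print(f"Mann-Whitney U: p = {p:.3f}")
```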
Traditional POE data (e.g., Likert scale data on workplace and IEQ satisfactions) can also be inspected for how it connects to, supports or contradicts the data on expectations, or commentary before and after.
Interpretations can be further connected to other data where expectations are unmet (e.g., in the Pre-OE, an occupant thought there would be art installed; in the POE, it turns out that the lack of it is perceived to hamper well-being, thereby connecting the two), or where expectations are exceeded (the occupant expected plentiful fresh air; they experienced more fresh air than expected, and this is perceived to directly enhance health). In all cases, a coding session of the same qualitative data could be conducted independently by another researcher to corroborate interpretations [57].
As mentioned, adding another layer of complexity to the assessment of the Expectations Gap is the particular case of highly novel, innovative, and/or green buildings, which are usually touted for their difference in publicly available and distributed promotional materials. According to Coleman and Robinson, for buildings that have been publicly marketed, the promotional material itself (brochures, building tour script, external and internal signage, online building manual, media coverage, etc.) will significantly influence occupant expectations. These materials can be collected, analysed, and interpreted to constitute a collective “Official Story” [17]. The same authors found that the Official Story derived from promotional materials and a media analysis was clearly reflected in occupant stories about the building [17,62]. In terms of analysis of the Expectations Gap, in the case of highly promoted buildings, designer and stakeholder aspirations may dominate occupant expectations, and the lived experience assessed in the POE is then positioned as a reaction to that Official Story [17].

3.2.2. The Future of the Expectations Gap

The Expectations Gap exists in green building performance as it does in other fields, because the complexities of reality—often of an institutional and social nature—intrude upon expectations. However, the Expectations Gap is unique in that it relies entirely on subjective qualitative data, which is based on the interplay, feedback and feedforward between prior, and later, occupant experience. When the stakeholders’ collective vision for the building is available at or even before moving in, the occupant experience is ultimately an evaluation of that vision.
It is worth noting that in the special case of sustainable buildings where promotional Official Stories influence occupant expectations, this promotion is almost always focused on innovative and novel building features, rather than on occupant social and well-being-related opportunities. This is significant because researchers [17,73] have shown that occupants are more readily disappointed with the anticipated physical benefits of a building than they are with the intangible benefits (such as social and creative opportunities, well-being and productivity, etc.) afforded by a building context. As a result, we suggest that the existence of a significant Expectations Gap in a sustainable building indicates that building stakeholders should focus more on promoting the social and productive aspects of inhabitation, rather than on innovative building features [17,53].
This focus on the role of the occupant provides a rationale for their involvement in design and operations, as is suggested by the Soft Landings approach [56] and a variety of other authors [55,74,75]. Cole et al. noted that building occupants may be considered ‘inhabitants’ when they play an active role in the maintenance and performance of their buildings, as opposed to ‘occupants’, who are passive recipients of pre-determined comfort conditions [52].
The logical complement to the involvement of engaged occupants is a continuous form of POE, allowing continuous feedback, feedforward and optimization of the building environment in tandem with occupants’ satisfaction, well-being, and productivity. This is furthermore in keeping with a regenerative approach, seeking to balance human and environmental well-being, and ultimately provides interactive adaptivity [52,53], allowing for an active process of mutual accommodation between the building and its occupants.
The concept of forgiveness illustrates the opportunity presented by the recommended notion of interactive adaptivity. As noted by various building researchers, occupants of green buildings appear to be more forgiving of conditions affecting their comfort than are occupants of conventional buildings [53,72,76,77,78,79]. (Leaman and Bordass calculate a ‘forgiveness factor’ as the ratio of overall comfort to the mean of the individual indoor environmental quality (IEQ) variable scores. The factor typically ranges from about 0.8 to 1.2; a factor higher than 1.0 indicates more tolerant occupants, who are willing to tolerate insufficient conditions in spite of expressed dissatisfactions (Leaman and Bordass, 2007).) Importantly, greater forgiveness is also associated with greater occupant control and feedback [62,77,80]. The message is that a green building, with greater control and feedback allowances for occupants, is more likely to be forgiven its failings. At the very least, adaptation through interactivity, in the gap between what was expected and what was delivered, allows for new expectations [17].
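A minimal sketch of the forgiveness factor calculation as defined above (ratings are hypothetical 7-point scale values):

```python
# Minimal sketch of the forgiveness factor defined above: overall comfort
# divided by the mean of the individual IEQ variable scores. Ratings are
# hypothetical 7-point scale values.

def forgiveness_factor(overall_comfort: float, ieq_scores: list[float]) -> float:
    return overall_comfort / (sum(ieq_scores) / len(ieq_scores))

ieq = [4.2, 4.5, 5.1, 3.8]  # e.g., temperature, air quality, lighting, noise
print(f"{forgiveness_factor(5.0, ieq):.2f}")  # 1.14 -> relatively tolerant occupants
```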
Last, a general challenge related to instituting interactive adaptivity and feedback processes entails a change in the ecosystem of building design and facilities management, for occupants, managers, designers, and clients. These changes have significant implications for the required skills and training of building operators. Further, occupants would be faced with typically unfamiliar expectations of engagement, and building and construction stakeholders would need to learn how to work with negative feedback (which must be seen as part and parcel of instituting dialogue in a regenerative sustainability context).

3.3. Outcomes Gap: Measured vs. Experienced

The Outcomes Gap (C) describes the difference between measured performance and the lived experience—between Environmental and Human System data. Specifically, this measured performance typically relates to indoor environmental quality parameters that are indicators of visual, acoustical and thermal comfort and indoor air quality. These parameters may include light levels, air temperature, mean radiant temperature, relative humidity, and carbon dioxide concentration, among others. The measured data is often compared to standards or accepted models, which are used to determine whether the measurements indicate a satisfactory environment. Data on the lived experience is typically captured through occupant surveys or interviews, both structured and unstructured. Guerra-Santin and Tweed [81] provide a comprehensive overview of data collection methods for evaluating building energy performance and thermal comfort using both qualitative and quantitative methods. These results can then be compared to determine whether the monitored conditions and the reported conditions agree with one another. Examination of the Outcomes Gap is a critical first step to contextualizing sometimes misleading quantitative metrics like energy performance or temperature, and to getting to the root of whether building occupants are satisfied and, perhaps more importantly, why they perceive the building in the way they do [82].
Numerous studies have gathered both Environmental and Human System data that can be used to examine the Outcomes Gap. Studies related to this gap are typically conducted in office environments and have largely focused on issues of personal environmental control and on perceptions versus measured conditions relating to daylight and ventilation. While all of the studies presented here involved the collection and analysis of both qualitative and quantitative data, only some have attempted to correlate the qualitative responses with the quantitative monitoring data, either directly for certain parameters like lighting levels, or indirectly, through translational models, as in the case of thermal comfort.
For example, studies that simply examine both data types include Altomonte and Schiavon [83], Geng et al. [84] and Newsham et al. [85]. Altomonte and Schiavon [83] studied the difference between BREEAM and non-BREEAM office buildings using survey data on lighting, acoustic and thermal comfort, collected alongside spot measurements of various IEQ parameters at the respondent’s workstation. The findings related to lighting led the authors to hypothesize that the personal control over lighting available in non-BREEAM buildings allowed occupants to intervene during periods of visual discomfort, resulting in greater satisfaction with the indoor environment and a tolerance for greater fluctuation in lighting levels. Geng et al. [84] compared environmental measurements, survey results and productivity tests under various air temperature conditions from 16 °C to 28 °C. Interestingly, they found that thermal dissatisfaction appeared to override awareness of other IEQ parameters like lighting, noise and IAQ, but that when occupants were thermally comfortable, they became more aware of these parameters, specifically noise and lighting, demonstrating the interconnectedness of IEQ perceptions and the primacy of thermal comfort. A laboratory-based study by Newsham et al. [85] used occupant surveys to evaluate occupant attitudes to having personal control over the changing environmental conditions that would occur during a demand response event. They found that the ability to control one’s lighting and ventilation levels both increased satisfaction and decreased energy use.
Examples of studies that attempted to find a correlation between survey responses and environmental measurements include Leder et al. [86] and Choi et al. [87]. A field-based study by Leder et al. [86] measured perceived conditions versus measurements of IEQ parameters in an office space. Stepwise regression was used to compare field measurements of environmental conditions at individual workstations to simultaneous online questionnaires about the respondent’s environment to determine which conditions most impacted occupant satisfaction. The study findings support the idea that green buildings can provide better perceived indoor environmental quality compared with conventional buildings. Choi et al. [87] compared 15-min IEQ measurements at various workstations in the perimeter and interior of an office with survey data yielding numerical ratings of various IEQ parameters. They found that IEQ guidelines did not necessarily yield comfortable conditions, and thus, made a series of recommendations about temperature, lighting and air flow rates to improve occupant satisfaction.
This type of research is less common in the residential sector, but a handful of studies have begun to examine the Outcomes Gap there. As in the office sector, some residential studies only examine the two datasets side by side, such as Liu et al. [23] and Dascalaki and Sermpetzoglou [88]. For example, Liu et al. [23] examined the performance of two Swedish apartment buildings: one that had undergone an energy retrofit and one that had not. They used indoor temperature data and energy data to calibrate a building energy model, which was then used to generate PMV and PPD data based on the model described in ASHRAE Standard 55. These pre- and post-retrofit modeled thermal comfort indicators were compared with reported thermal comfort issues from pre-retrofit and post-retrofit occupant surveys. A correlation was found, but not quantified, given the different metrics output from the model and the surveys. The surveys also found improvements in many other areas of indoor environmental quality, including noise and air quality, but site measurements for these parameters were not collected for comparison with these perceived improvements. A study of Hellenic schools included IEQ measurements for one week as well as occupant surveys of teachers and some pupils [88]. Approximately 60% of the time, the monitored conditions were considered unacceptable relative to standards such as ASHRAE’s. The survey data, which included a numerical rating of various IEQ parameters, was reported separately, without a comparison to the monitored data. The most frequent survey complaints related to insufficient ventilation, noise disturbance, glare and thermal discomfort.
Only one residential study directly examined the gap between the qualitative and quantitative data. A three-year study of social housing buildings in Toronto showed a significant difference in the resident-reported thermal comfort and the calculated thermal comfort levels based on ASHRAE Standard 55 and in-situ environmental measurements in both summer and winter [89,90]. Generally, occupants were less satisfied with wintertime conditions than the comfort standard would suggest, but interestingly, the particular combination of the building type (e.g., low or high rise) and occupancy type (e.g., senior or family) appeared to have an impact on the level of agreement between reported and calculated thermal comfort. The mid-rise buildings (which had a combination of senior and single occupancy types) saw the closest agreement between measured and perceived thermal comfort. While the measured data collected from high-rise buildings occupied by families suggested a higher level of comfort than the survey responses indicated, the data collected from the low-rise buildings occupied by seniors suggested a lower level of comfort, despite occupants reporting satisfactory conditions.
There are some obvious limitations to comparing monitored data with results from a survey. Comparing qualitative survey responses with quantitative parameter measurements leads to questions about what constitutes an agreement between these two data sets. Are we measuring the right parameters and asking the right questions to allow for a direct comparison between these qualitative and quantitative data? How do we compare continuously monitored parameter data to episodic survey data? Furthermore, should average or extreme conditions be compared? The next section explores these aspects related to assessing the Outcomes Gap.

3.3.1. Assessing the Outcomes Gap

(1) Comparison of qualitative and quantitative data
Of the three performance gaps, the Outcomes Gap is perhaps the most challenging to assess and quantify, primarily because fundamentally different metrics must be compared. The first two performance gaps involve comparisons of the same data type (e.g., degrees Celsius or ekWh/m2 for the Prediction Gap, or the results of a ranking survey question for the Expectations Gap). To assess the Outcomes Gap, on the other hand, survey results must be compared with monitored data.
This ‘translation’ of data to enable the comparison between these two data types often occurs via an existing model. The model serves as a means of converting one data set into a form where it can be directly compared with the other. The challenge with this comparison, and a likely contribution to the existence of the Outcomes Gap, lies in the model assumptions. While it may be possible to reduce the influence of inaccurate assumptions by gathering more performance data, there is a practical limit to the number, type and placement of sensors in occupied spaces. Furthermore, there is a question as to whether we are even measuring the correct parameters to assess occupant satisfaction.
For example, the thermal comfort model in ASHRAE Standard 55: Thermal Environmental Conditions for Human Occupancy includes inputs such as air temperature, mean radiant temperature, relative humidity, air velocity, clothing level and metabolic rate to determine whether the majority of occupants would find a particular set of conditions comfortable. These values are used to determine the Predicted Mean Vote (PMV) or the Predicted Percentage Dissatisfied (PPD), which can be directly compared to the results of a survey question that asks occupants to rate their thermal comfort on the ASHRAE thermal sensation scale, or to state whether they are satisfied or dissatisfied with their thermal environment. The empirical relationship between these model inputs and outputs may be quite strong when the inputs are fully characterized at the occupant level in a laboratory setting and the testing is conducted on a sufficiently large sample. However, gathering the model inputs in an actual building is significantly more challenging. This can be seen by examining how we would collect data to input into a thermal comfort model in order to allow for a comparison with the survey data.
Devices such as smart thermostats and sensors connected to building automation systems can easily collect data on air temperature and relative humidity; however, both of these parameters are collected locally at the sensor location and do not necessarily reflect the variation in these conditions throughout the zone of interest, and specifically, where the occupant is currently located. Data on the other parameters are significantly more challenging to collect. Mean radiant temperature (MRT) can be measured using a globe thermometer located in the geometric centre of the zone, which is obviously impractical for an occupied space, and only reflects MRT at the location of the thermometer. Alternatively, infrared imaging can be used to determine interior surface temperatures, and then shape factors can be used to determine the resulting mean radiant field at the location of the occupant.
Once again, the imaging and shape-factor calculations are impractical for long-term monitoring, as all surface temperatures in the zone must be collected and the occupants’ locations must be known. To characterize draught sensation and air speed throughout a zone, a matrix of air velocity sensors distributed through the zone volume would be required, which would be impossible to instrument in an occupied space. Clothing levels and metabolic rate depend on occupant preferences and activities. All of these parameters vary across the zone, from person to person, and through the day and year. Therefore, with current, commonly available sensor technology, assumptions about many of these monitored parameters are required in order to use ‘translational’ models such as those described in ASHRAE Standard 55.
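To make one such assumption concrete: MRT is commonly estimated from a globe thermometer reading using the forced-convection relation in ISO 7726. The sketch below assumes a standard 150 mm matte-black globe (emissivity of 0.95); even with these defaults, the result is valid only at the globe’s location.

```python
def mean_radiant_temp(tg, ta, va, d=0.15, emissivity=0.95):
    """Estimate mean radiant temperature (deg C) from a globe thermometer
    using the forced-convection relation in ISO 7726.
    tg: globe temperature (deg C), ta: air temperature (deg C),
    va: air velocity at the globe (m/s), d: globe diameter (m)."""
    return ((tg + 273.0) ** 4
            + 1.1e8 * va ** 0.6 / (emissivity * d ** 0.4) * (tg - ta)
            ) ** 0.25 - 273.0

# E.g., a globe reading slightly above air temperature in gentle air movement:
print(mean_radiant_temp(tg=24.5, ta=23.0, va=0.15))
```

Note that the air velocity at the globe, va, is itself one of the hard-to-measure inputs described above, so the MRT estimate inherits that uncertainty.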
Even if it were possible to collect all of these environmental data in an occupied space, some occupant perceptions and preferences will extend beyond the upper and lower acceptability limits dictated by the model. So, while the model might indicate the conditions under which most occupants would be satisfied, most current models cannot precisely predict how a given occupant will perceive particular interior conditions. Perhaps, therefore, our efforts are better spent using survey data to interpret the quantitative data collected through monitoring, rather than trying to compare the two directly.
(2) Comparison of continuous and episodic data collection
Inexpensive data storage means that indoor environmental quality (IEQ) parameters can be collected at short intervals (e.g., every few minutes) over long monitoring periods (e.g., months or years). Surveys and interviews, by contrast, are generally expensive and time-consuming to conduct, and so, in a given study, are often administered only once or, at best, a handful of times during the study period. Even with surveys delivered via smartphone, there is a practical limit to how often a participant can be surveyed before annoyance or impatience sets in. This difference in data collection frequency presents challenges for comparing qualitative survey data with quantitative monitored data.
These occupant surveys may capture real-time or retrospective observations [82]; however, respondents’ ability to recall their experiences varies between individuals and with the time span they are asked to consider. For example, an occupant may easily recall the quality of their experience in a space over the last 15 min or the last hour, but may find it more difficult to characterize that experience over several months or a year. This may result from trying to recall past experiences under different current conditions or after a significant passage of time (e.g., being asked about wintertime thermal comfort in the middle of summer), or from the difficulty of aggregating a range of experienced conditions (e.g., will the occupant report on extreme periods of discomfort or on their impression of average conditions in the space?). The wording of survey questions, as well as the motivation of the occupant, can also influence how the questions are interpreted and answered.
With respect to the monitored data, we must decide what time span to use for comparison with the survey responses. This decision is relatively simple if the survey asks an occupant to recall conditions over a 15-min period; but if occupants are queried about seasonal differences in their satisfaction with the space, how should the monitored data be processed for comparison? One option is to compare the average or extreme values over the time period with the survey responses. Alternatively, a translational model can be used to determine whether conditions would be considered satisfactory over the given period, and the proportion of time with satisfactory conditions can then be compared with the survey response (see the sketch below). Regardless, we must make assumptions about how respondents interpret the questions before processing and comparing the data.
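As a minimal sketch of this second option, assuming monitored conditions are held as a timestamped pandas series and each survey response carries a timestamp and a recall window, the illustrative function below computes the share of the window spent inside a hypothetical ‘satisfactory’ band; the column names, band limits and data are all assumptions for the example.

```python
import pandas as pd

def pct_time_satisfactory(monitored, survey_time, recall_window, lo=20.0, hi=26.0):
    """Share of the recall window preceding a survey response during which the
    monitored parameter fell inside an assumed 'satisfactory' band [lo, hi].
    monitored: pandas Series indexed by timestamp (e.g., air temperature, deg C)."""
    window = monitored.loc[survey_time - recall_window : survey_time]
    if window.empty:
        return float("nan")  # no monitored data overlapping the recall window
    return float(window.between(lo, hi).mean())

# Illustrative use: 5-minute temperature records compared against a survey
# question that asked about the preceding month.
idx = pd.date_range("2018-01-01", "2018-03-01", freq="5min")
temps = pd.Series(22.0, index=idx)  # stand-in for real monitored data
share = pct_time_satisfactory(temps, pd.Timestamp("2018-02-15 14:00"),
                              pd.Timedelta(days=30))
print(f"{share:.0%} of the recall window within the comfort band")
```

Even this simple calculation embeds the assumptions flagged above: the width of the band, the length of the recall window, and the premise that respondents mentally average rather than remember extremes.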

3.3.2. The Future of the Outcomes Gap

There are numerous challenges that must be addressed with respect to the Outcomes Gap. However, as with the Expectations Gap, instead of trying to close this gap, we may wish to use its existence as a creative resource to inform new, more meaningful ways of thinking about performance, and thereby to develop better indicators of occupant satisfaction. The critical question we must ask ourselves is which data type, Environmental System or Human System, is indicative of ‘real’ performance. In other words, we can measure indoor conditions and compare them to a standard, but if the occupants are not satisfied, does it really matter that the quantitative measurements meet the standard? This section discusses some of the challenges and opportunities in collecting and comparing these two data types.
To improve occupant satisfaction, we can increase the frequency of occupant feedback; provide adequate control over the space, either centrally or individually, in response to occupant discomfort; and, perhaps most importantly, find ways to communicate the consequences of this gap to building designers and owners.
With respect to data collection, as sensors become smaller and less expensive, a number of wearable options may be able to provide real-time, local-to-the-occupant environmental feedback to building control systems. Mobile devices, desktop apps or wireless polling stations throughout a building can provide easier, less intrusive ways to collect real-time data on occupant perceptions of their space, and deliver these data directly to building operators. Alternatively, image analytics of facial expressions from security footage, or assessment of mood via social media, could yield occupant feedback without requiring explicit participation in the process. In all cases, establishing best practices around privacy is a challenge not limited to methods of building performance assessment, and insight may be drawn from other fields where monitoring is common.
Another challenge arises where an individual’s desire for control over their space may be at odds with the satisfaction of the larger group occupying it; this must be carefully considered and addressed at the design stage. Furthermore, building operators must manage this individual versus group feedback in order to make use of the data. Applications such as the Comfy App, which aggregate feedback and provide guidance to operators, are necessary to process these vast amounts of real-time data quickly enough to make the findings actionable. Even with such advances in data collection and processing, however, we are often limited by the controls and zoning of our current heating, cooling, ventilation, lighting, window and blind control systems, to name a few.
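As a toy illustration of such aggregation (our own construction, not the logic of the Comfy App or any other product), the sketch below tallies hypothetical comfort votes by zone and flags zones where enough occupants report discomfort to warrant operator attention; the zone names, vote encoding and thresholds are all assumptions.

```python
from collections import defaultdict

# Hypothetical votes: (zone, vote), where vote is -1 (too cold), 0 (comfortable),
# +1 (too warm), gathered from a mobile app over the last polling interval.
votes = [("3F-east", 1), ("3F-east", 1), ("3F-east", 0),
         ("3F-west", -1), ("3F-west", 0), ("3F-west", 0)]

def flag_zones(votes, min_votes=3, dissatisfied_share=0.5):
    """Flag zones where at least `dissatisfied_share` of recent voters report
    discomfort, and report the dominant direction of that discomfort."""
    by_zone = defaultdict(list)
    for zone, vote in votes:
        by_zone[zone].append(vote)
    flags = {}
    for zone, vs in by_zone.items():
        if len(vs) < min_votes:
            continue  # too few votes to act on; avoids chasing one opinion
        dissatisfied = [v for v in vs if v != 0]
        if len(dissatisfied) / len(vs) >= dissatisfied_share:
            flags[zone] = "too warm" if sum(dissatisfied) > 0 else "too cold"
    return flags

print(flag_zones(votes))  # {'3F-east': 'too warm'}
```

The `min_votes` guard illustrates the individual-versus-group tension in miniature: a single dissenting vote does not move the zone, which protects the majority but can leave a persistent minority unserved.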
Finally, the findings from exploring this gap must be communicated beyond academic circles. Best practices and common challenges should be incorporated into guidelines, standards and regulations so that minimizing or eliminating this gap is considered from the earliest stages of schematic design. Only by considering this performance gap throughout the design, construction and operation of a building can we put occupant wellbeing first and foremost among our performance objectives for the built environment.

4. Conclusions

Achieving the kinds of climate change and sustainability goals increasingly adopted by jurisdictions around the world will require unprecedented improvements in the performance of the built environment. A crucial component of achieving such goals will be the implementation of performance assessment approaches that allow us to evaluate the sustainability performance of the built environment in accurate and meaningful ways.
Adopting a regenerative approach to sustainability assessments suggests that such assessments must be able to evaluate both human and environmental performance, and also to assess the degree to which net positive outcomes in both of these areas have been achieved. This paper has provided a conceptual framework in terms of which net positive outcomes in both environmental and human terms can be assessed. The framework quadrants (Figure 1) indicate human and environmental system performance assessment activities, which include predicting building performance; using pre- and post-occupancy evaluation to understand occupant and possibly stakeholder expectations and actual experience; and measuring actual environmental performance. These activities taken as a whole, along with innovations on these activities as discussed above and summarized below, could provide the creative material for new pathways towards net positive design and sustainability.
Articulation of this framework led us to posit three important performance gaps:
  • Prediction Gap: Predicted versus actual resource use (e.g., modelled and measured energy, water consumption);
  • Expectations Gap: Expectations regarding the performance of sustainable buildings versus the actual lived experience of the building occupants (e.g., pre- and post-occupancy evaluations);
  • Outcomes Gap: Measured performance versus lived experience (e.g., thermal comfort measurements and survey results).
The Prediction Gap between the predicted and actual performance of environmental systems in the built environment is the best known of these performance gaps, and has been the subject of much research and scholarly discussion. In the main, however, the approach taken to this gap has been to look for ways to narrow or eliminate it; in other words, the goal has been to make predictions of building performance more accurate.
In contrast to this approach, we identify a Prediction Paradox which suggests that because of the complex set of behavioral and institutional factors that give rise to the Prediction Gap, trying to achieve accurate predictions of actual whole building performance at the design stage is perhaps the wrong goal. Instead, we should supplement predictive analysis of building components and technologies with scenario analysis and backcasting approaches. These are intended to identify design strategies that may be resilient to the inherent uncertainty about how the building will actually perform once built and occupied. In this sense, the Prediction Gap can be seen as a creative resource to be explored and used to improve the resilience of design strategies.
There is increasing interest in the literature in the experience and behaviour of building occupants, and how this affects the performance of buildings in both human and environmental terms. The Expectations Gap is the difference between the expectations and the actual experience of building occupants. As suggested here and in the original analysis [17], it may be useful to think of this gap as a source of creative tension and opportunity for interplay between the building systems and the building occupants. Such ‘interactive adaptivity’ may enable passive building ‘occupants’ to instead become active ‘inhabitants’ of the building, with a sense of place in, and engagement with, the building itself. The Expectations Gap becomes the basis for a conversation between the building and its inhabitants, with a goal of improving conditions in both directions over time. As buildings become smarter, this conversation can be expected to become more meaningful, and potentially, much more effective.
The Outcomes Gap between the measured performance of the built environment and the human experience of it is, in some ways, the most difficult to assess, since it combines quantitative performance measurement with qualitative human responses. Moreover, the existence of such a gap raises an important philosophical question: what are the real or true performance measures of the built environment in question? Are they the measured values of performance, such as temperature or humidity, or the experiences of comfort and ease?
This philosophical question has important practical implications: do we try to bring the experience of the environment in line with the measured performance (e.g., use our understanding of people’s experience to help them better interpret the meaning of the measured data), or do we adapt performance measurements to better reflect actual experience? Our suggestion, finally, is to do both: use qualitative data to better interpret performance measurements, and also measure more experientially meaningful outcomes. In so doing, we can explore the interplay between Environmental and Human Systems data.
For all three performance gaps, we propose using their existence as a creative resource, offering the potential to better understand how to achieve net positive sustainability outcomes in both human and environmental terms. If our built environment is to become significantly more sustainable, we need to harness the creative energy of designers, operators and building inhabitants in developing new ways not only to design, build and retrofit our built environment, but also to engage in processes of continuous improvement over time. Treating the three performance gaps we have identified as evidence of opportunities to pursue such improvements is one important way to contribute to this goal.

Author Contributions

All authors contributed to the writing and editing of this manuscript, and each has approved the submitted version.

Funding

This research received no external funding.

Acknowledgments

We would like to thank the members of the Sustainable Built Environment Performance Assessment group at the University of Toronto for their feedback on drafts, particularly Fiona Miller, Bryan Karney and Frances Silverman. We would like to acknowledge the following University of Toronto Faculties for funding the Sustainable Built Environment Performance Assessment network: Faculty of Architecture, Landscape and Design; Faculty of Arts and Science; Faculty of Applied Science and Engineering; and The Dalla Lana School of Public Health.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Wilde, P.; Coley, D. The implications of a changing climate for buildings. Build. Environ. 2012, 55, 1–7.
  2. Wan, K.K.; Li, D.H.; Pan, W.; Lam, J.C. Impact of climate change on building energy use in different climate zones and mitigation and adaptation implications. Appl. Energy 2012, 97, 274–282.
  3. Lucon, O.; Ürge-Vorsatz, D.; Zain Ahmed, A.; Akbari, H.; Bertoldi, P.; Cabeza, L.; Eyre, N.; Gadgil, A.; Harvey, L.; Jiang, Y.; et al. Buildings. In Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; Edenhofer, O., Pichs-Madruga, R., Sokona, Y., Farahani, E., Kadner, S., Seyboth, K., Adler, A., Baum, I., Brunner, S., Eickemeier, P., et al., Eds.; IPCC: Geneva, Switzerland, 2014; Chapter 9.
  4. Euractiv Climate Change News. Cities Take up Climate Baton at COP23, Make Ambitious Emission Pledges. Available online: https://www.euractiv.com/section/climate-environment/news/cities-take-up-climate-baton-at-cop23-make-ambitious-emission-pledges/ (accessed on 30 November 2018).
  5. Global Covenant of Mayors for Climate and Energy. At COP23, Global Covenant of Mayors Raises the Level of Ambition at the Subnational Level and Local Leaders Showcase their Commitment to Accelerating Global Progress on Climate Change. Available online: http://www.globalcovenantofmayors.org/press/cop-23-gcom-reports-collective-impact-committed-cities-announces-new-ghg-emissions-inventory-standard-cities-local-governments/ (accessed on 30 November 2018).
  6. City of Toronto. Agenda Item PE 19.4.: Transform TO: Climate Action for a Healthy, Equitable and Prosperous Toronto, Report 2: The Pathway to a Low Carbon Future; Sustainable Solutions Group: Vancouver, BC, Canada, 2017.
  7. City of Toronto. 2050: Pathway to a Low-Carbon Toronto, Report 2, Highlights of the City of Toronto Staff Report; Sustainable Solutions Group: Vancouver, BC, Canada, 2017.
  8. City of Toronto. TransformTO: Sustainability Solutions Group and what if? Climate Action for a Healthy, Equitable, Prosperous Toronto, Results of Modelling Greenhouse Gas Emissions to 2050; Sustainable Solutions Group: Vancouver, BC, Canada, 2017; p. 42.
  9. Canada Green Building Council. Green Building in Canada: Assessing the Market Impacts & Opportunities. Available online: https://www.cagbc.org/cagbcdocs/advocacy/Green_Building_in_Canada_CaGBC_and_Delphi_Report_Executive_Summary.pdf (accessed on 30 November 2018).
  10. Robinson, J.; Cole, R. Theoretical underpinnings of regenerative sustainability. Build. Res. Inf. 2015, 43, 133–143.
  11. Cole, R.J. Net-zero and net-positive design. Build. Res. Inf. 2015, 43, 1–6.
  12. Chu, A.-M.; Cayuela, A.; Robinson, J. Visions and Strategies for Sustainable Buildings and Neighbourhoods—An International Scan of Highly Sustainable Building and Neighbourhood Projects around the World; Report Prepared for Copenhagen Business School, Centre for Interactive Research on Sustainability; The University of British Columbia: Vancouver, BC, Canada, 2015.
  13. Gorgolewski, M.; Brown, C.; Chu, A.-M.; Turcato, A.; Barlett, K.; Ebrahimi, G.; Hodgson, M.; Mallory-Hill, S.; Ouf, M.; Scannell, L. Performance of Sustainable Buildings in Colder Climates. J. Green Build. 2016, 11, 131–153.
  14. International Living Future Institute. Available online: https://living-future.org/lbc/ (accessed on 30 November 2018).
  15. Fedoruk, L.; Cole, R.; Robinson, J.; Cayuela, A. Learning from failure: Understanding the anticipated-achieved building energy performance gap. Build. Res. Inf. 2015, 43, 750–763.
  16. Chu, A.-M. Understanding the Performance Gap: An Evaluation of the Energy Efficiency of Three High-Performance Buildings in British Columbia. Master’s Thesis, University of British Columbia, Vancouver, BC, Canada, 2016.
  17. Coleman, S.; Robinson, J. Introducing the qualitative performance gap: Stories about a sustainable building. Build. Res. Inf. 2017, 46, 485–500.
  18. Vischer, J. Post-Occupancy Evaluation: A Multifaceted Tool for Building Improvement. In Learning from Our Buildings: A State-of-the-Practice Summary of Post-Occupancy Evaluation; Federal Facilities Council & National Research Council, National Academies Press: Washington, DC, USA, 2001; pp. 23–34.
  19. Preiser, W.F.; White, E.; Rabinowitz, H. Post-Occupancy Evaluation (Routledge Revivals); Routledge: Abingdon-on-Thames, UK, 2015.
  20. Leaman, A.; Bordass, W.; Cohen, M.; Standeven, M. The Probe Occupant Surveys, Buildings in Use ’97: How Buildings Really Work; Commonwealth Institute: London, UK, 1997.
  21. Heerwagen, J.; Zagreus, L. The Human Factors of Sustainable Building Design: Post Occupancy Evaluation of the Philip Merrill Environmental Center; UC Berkeley Center for the Built Environment: Berkeley, CA, USA, 2005. Available online: https://escholarship.org/uc/item/67j1418w (accessed on 17 December 2018).
  22. Mallory-Hill, S.; Preiser, W.F.E.; Watson, C.G. (Eds.) Enhancing Building Performance; Blackwell Publishing Ltd.: Oxford, UK, 2012; ISBN 978-0-470-65759-1.
  23. Liu, L.; Rohdin, P.; Moshfegh, B. Evaluating Indoor Environment of a Retrofitted Multi-Family Building with Improved Energy Performance in Sweden. Energy Build. 2015, 102, 32–44.
  24. Shin, S.; Jeong, S.; Lee, J.; Wan Hong, S.; Jung, S. Pre-Occupancy Evaluation based on user behavior prediction in 3D virtual simulation. Autom. Constr. 2017, 74, 55–65.
  25. Alzoubi, H.; Al-Rqaibat, S.; Bataineh, R.F. Pre- versus post-occupancy evaluation of daylight quality in hospitals. Build. Environ. 2010, 45, 2652–2665.
  26. Reckermann, J. CIRS Pre-Occupancy Evaluation: Inhabitant Feedback Processes and Possibilities for a Regenerative Place. Master’s Thesis, University of British Columbia, Vancouver, BC, Canada, 2014.
  27. Newsham, G.; Birt, B.; Arsenault, C.; Thompson, L.; Veitch, J.; Mancini, S.; Galasiu, A.; Gover, B.; Macdonald, I.; Burns, G. Do Green Buildings Outperform Conventional Buildings? Indoor Environment and Energy Performance in North American Offices; National Research Council Canada: Vancouver, BC, Canada, 2012. Available online: http://nparc.nrc-cnrc.gc.ca/eng/view/object/?id=1714b57c-88c2-4dec-953a-0e640b7db12b (accessed on 30 November 2018).
  28. Singh, A.; Syal, M.; Grady, S.C.; Korkmaz, S. Effects of green buildings on employee health and productivity. Am. J. Public Health 2010, 100, 1665–1668.
  29. Schreuder, E.; van Heel, L.; Goedhart, R.; Dusseldorp, E.; Schraagen, J.M.; Burdorf, A. Effects of newly designed hospital buildings on staff perceptions: A pre-post study to validate design decisions. HERD-Health Environ. Res. 2015, 8, 77–97.
  30. van der Zwart, J.; van der Voordt, T.J. Adding value by hospital real estate: An exploration of Dutch practice. HERD-Health Environ. Res. 2015, 9, 52–68.
  31. Cole, R.J.; Brown, Z.; McKay, S. Building human agency: A timely manifesto. Build. Res. Inf. 2010, 38, 339–350.
  32. Birt, B.; Newsham, G.R. Post-occupancy Evaluation of Energy and Indoor Environment Quality in Green Buildings: A Review. In Proceedings of the 3rd International Conference on Smart and Sustainable Built Environments, Delft, The Netherlands, 15 June 2009; pp. 1–7.
  33. Diamond, R. Evaluating the Energy Performance of the First Generation of LEED-Certified Commercial Buildings; Lawrence Berkeley National Laboratory: Berkeley, CA, USA, 2011.
  34. Oates, D.; Sullivan, K. Postoccupancy energy consumption survey of Arizona’s LEED new construction population. J. Constr. Eng. Manag. 2012, 138.
  35. Turner, C.; Frankel, M. Energy Performance of LEED for New Construction Buildings; Final Report; US Green Building Council: Washington, DC, USA, 2008.
  36. Newsham, G.; Mancini, S.; Birt, B.J. Do LEED-certified buildings save energy? Yes, but. Energy Build. 2009, 41, 897–905.
  37. Storey, S. Application of Life-Cycle Approaches for the Evaluation of High Performance Buildings. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 2014.
  38. Bartlett, K.; Brown, C.; Chu, A.-M.; Ebrahimi, G.; Gorgolewski, M.; Hodgson, M.; Mallory-Hill, S.; Ouf, M.; Scannell, L.; Turcato, A. Do our green buildings perform as intended? In Proceedings of the 2014 Sustainable Building Challenge (SB14), Barcelona, Spain, 28–30 October 2014. Available online: http://iisbecanada.ca/umedia/cms_files/Conference_Paper_1.pdf (accessed on 30 July 2018).
  39. Touchie, M.F.; Pressnail, K.D. Using suite energy-use and interior condition data to improve energy modeling of a 1960s MURB. Energy Build. 2014, 80, 184–194.
  40. Touchie, M.F.; Binkley, C.; Pressnail, K.D. Correlating Energy Consumption with Multi-Unit Residential Building Characteristics in the City of Toronto. Energy Build. 2013, 66, 648–656.
  41. Bordass, W.; Cohen, R.; Field, J. Energy Performance of Non-Domestic Buildings: Closing the Credibility Gap; Building Performance Congress: Frankfurt, Germany, 2004. Available online: http://www.usablebuildings.co.uk/Pages/Unprotected/EnPerfNDBuildings.pdf (accessed on 30 November 2018).
  42. Gram-Hanssen, K.; Georg, S. Energy performance gaps: Promises, people, practices. Build. Res. Inf. 2018, 46, 1–9.
  43. Sunikka-Blank, M.; Galvin, R.; Behar, C. Harnessing social class, taste and gender for more effective policies. Build. Res. Inf. 2018, 46, 114–126.
  44. van den Brom, P.; Meijer, A.; Visscher, H. Performance gaps in energy consumption: Household groups and building characteristics. Build. Res. Inf. 2018, 46, 54–70.
  45. Gram-Hanssen, K. Residential heat comfort practices: Understanding users. Build. Res. Inf. 2010, 38, 175–186.
  46. Robinson, J.F.; Foxon, T.J.; Taylor, P.G. Performance gap analysis case study of a non-domestic building. Proc. Inst. Civ. Eng. Eng. Sustain. 2016, 169, 31–38.
  47. Bartlett, K.; Brown, C.; Chu, A.-M.; Ebrahimi, G.; Gorgolewski, M.; Hodgson, M.; Issa, M.; Mallory-Hill, S.; Ouf, M.; Scannell, L.; et al. Poster for Canadian Building Performance Evaluation Project. In Proceedings of the 2014 Sustainable Building Challenge (SB14) Conference, Barcelona, Spain, 28–30 October 2014. Available online: http://iisbecanada.ca/umedia/cms_files/Canada_Overview_Poster_V5.pdf (accessed on 17 December 2018).
  48. Schwartz, P. The Art of the Long View: Planning for the Future in an Uncertain World; Doubleday: New York, NY, USA, 1991; ISBN 978-0385267328.
  49. Wiebe, K.; Zurek, M.; Lord, S.; Brzezina, N.; Gabrielyan, G.; Libertini, J.; Westhoek, H. Scenario development and foresight analysis: Exploring options to inform choices. Annu. Rev. Environ. Resour. 2018, 43, 1.1–1.26.
  50. Robinson, J. Energy backcasting: A proposed method of policy analysis. Energy Policy 1982, 10, 337–344.
  51. Robinson, J.; Burch, S.; Talwar, S.; O’Shea, M.; Walsh, M. Envisioning sustainability: Recent progress in the use of participatory backcasting approaches for sustainability research. Technol. Forecast. Soc. Chang. 2011, 78, 756–768.
  52. Cole, R.J.; Robinson, J.; Brown, Z.; O’Shea, M. Re-contextualizing the notion of comfort. Build. Res. Inf. 2008, 36, 323–336.
  53. Brown, Z.; Cole, R.J. Influence of occupants’ knowledge on comfort expectations and behaviour. Build. Res. Inf. 2009, 37, 227–245.
  54. Brown, Z. Occupant Comfort and Engagement in Green Buildings: Examining the Effects of Knowledge, Feedback and Workplace Culture. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 2009.
  55. Brown, Z.; Dowlatabadi, H.; Cole, R. Feedback and adaptive behaviour in green buildings. Intell. Build. Int. 2009, 1, 296–315.
  56. Way, M.; Bordass, B. Making feedback and post-occupancy evaluation routine 2: Soft landings–involving design and building teams in improving performance. Build. Res. Inf. 2005, 33, 353–360.
  57. Grbich, C. Qualitative Data Analysis: An Introduction, 2nd ed.; Sage: Abingdon-on-Thames, UK, 2013.
  58. Saldaña, J. The Coding Manual for Qualitative Researchers, 2nd ed.; Sage: Abingdon-on-Thames, UK, 2013.
  59. Kvale, S.; Brinkmann, S. Interviews: Learning the Craft of Qualitative Research, 2nd ed.; Sage: Abingdon-on-Thames, UK, 2009.
  60. Bazeley, P. Master Class: Theory and Practice of Qualitative Data Analysis in NVivo; NVivo Symposium conducted by SFU Library’s Research Commons: SFU Harbour Centre, Vancouver, BC, Canada, 2015. Available online: https://scarp.ubc.ca/school/weekly-digest/2015/03/nvivo-symposium-and-advanced-workshop-qualitative-data-analysis-april (accessed on 17 December 2018).
  61. Bazeley, P. Analysing qualitative data: More than ‘identifying themes’. Malays. J. Qual. Res. 2009, 2, 6–22.
  62. Coleman, S. Normalizing Sustainability in a Regenerative Building: The Social Practice of Being at CIRS. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 2016.
  63. Berkhout, F. Normative expectations in systems innovation. Technol. Anal. Strateg. Manag. 2006, 18, 299–311.
  64. Janda, K.B.; Topouzi, M. Telling tales: Using stories to remake energy policy. Build. Res. Inf. 2015, 43, 516–533.
  65. Janda, K.B.; Topouzi, M. Closing the loop: Using hero stories and learning stories to remake energy policy. In Rethink, Renew, Restart, Proceedings of the ECEEE Summer Study, Belambra Les Criques, Toulon/Hyères, France, 3–8 June 2013; European Council for an Energy-Efficient Economy: Brussels, Belgium, 2013; ISBN 978-91-980482-3-0.
  66. Rotmann, S.; Mourik, R.; Goodchild, B. Once Upon a Time... How to tell a good energy efficiency story that ‘sticks’. In First Fuel Now, Proceedings of the ECEEE 2015 Summer Study, Stockholm, Sweden, 1–6 June 2015; European Council for an Energy-Efficient Economy: Brussels, Belgium, 2015; ISBN 978-91-980482-7-8.
  67. Chu, A.-M.; Ebrahimi, G.; Scannell, L.; Save, P.; Hodgson, M.; Bartlett, K.; Gorgolewski, M. Building Performance Evaluation for Jim Pattison Centre of Excellence in Sustainable Building Technologies and Renewable Energy Conservation. 2015. Available online: http://iisbecanada.ca/umedia/cms_files/Report_-_JPCOE_Final_Feb2015.pdf (accessed on 17 December 2018).
  68. Thatcher, A.; Milner, K. Changes in productivity, psychological wellbeing and physical wellbeing from working in a ‘green’ building. Work 2014, 49, 381–393.
  69. Paevere, P.; Brown, S. Indoor Environment Quality and Occupant Productivity in the CH2 Building: Post-Occupancy Summary; Report No. USP2007/23. Available online: http://dro.deakin.edu.au/eserv/DU:30018085/luther-indoorenvironment-2008.pdf (accessed on 30 November 2018).
  70. McCunn, L.J.; Gifford, R. Do green offices affect employee engagement and environmental attitudes? Archit. Sci. Rev. 2012, 55, 128–134.
  71. Gray, G.; Guppy, N. Successful Surveys: Research Methods and Practice, 4th ed.; Thomson Nelson: Nashville, TN, USA, 2008.
  72. Deuble, M.P.; de Dear, R.J. Is it hot in here or is it just me? Validating the post-occupancy evaluation. Intell. Build. Int. 2014, 6, 112–134.
  73. Dunn, E.W.; Wilson, T.D.; Gilbert, D.T. Location, location, location: The misprediction of satisfaction in housing lotteries. Pers. Soc. Psychol. Bull. 2003, 29, 1421–1432.
  74. Leaman, A.; Bordass, B. Assessing building performance in use 4: The Probe occupant surveys and their implications. Build. Res. Inf. 2001, 29, 129–143.
  75. Mang, P.; Reed, B. Designing from place: A regenerative framework and methodology. Build. Res. Inf. 2012, 40, 23–38.
  76. Cranz, G.; Lindsay, G.; Morhayim, L.; Lin, A. Communicating sustainability: A postoccupancy evaluation of the David Brower Center. Environ. Behav. 2014, 46, 826–847.
  77. Deuble, M.P.; de Dear, R.J. Green occupants for green buildings: The missing link? Build. Environ. 2012, 56, 21–27.
  78. Leaman, A.; Bordass, B. Are users more tolerant of ‘green’ buildings? Build. Res. Inf. 2007, 35, 662–673.
  79. Rapoport, A. Human Aspects of Urban Form: Towards a Man–Environment Approach to Urban Form and Design; Pergamon Press: New York, NY, USA, 1977; ISBN 9781483156828.
  80. Leaman, A. Dissatisfaction and office productivity. Facilities 1995, 13, 13–19.
  81. Guerra-Santin, O.; Tweed, C.A. In-use monitoring of buildings: An overview of data collection methods. Energy Build. 2015, 93, 189–207.
  82. Day, J.K.; O’Brien, W. Oh behave! Survey stories and lessons learned from building occupants in high-performance buildings. Energy Res. Soc. Sci. 2017, 31, 11–20.
  83. Altomonte, S.; Schiavon, S. Occupant Satisfaction in LEED and BREEAM-certified office buildings. In Proceedings of the 36th International Conference on Passive and Low Energy Architecture, Cities, Buildings, People: Towards Regenerative Environments, Los Angeles, CA, USA, 11–13 July 2016.
  84. Geng, Y.; Ji, W.; Zhu, Y. The impact of thermal environment on occupant IEQ perception and productivity. Build. Environ. 2017, 121, 158–167.
  85. Newsham, G.; Mancini, S.; Veitch, J.; Marchand, R.; Lei, W.; Charles, K. Control strategies for lighting and ventilation in offices: Effects on energy and occupants. Intell. Build. Int. 2009, 1, 101–121.
  86. Leder, S.; Newsham, G.R.; Veitch, J.A.; Mancini, S.; Charles, K.E. Effects of office environment on employee satisfaction: A new analysis. Build. Res. Inf. 2016, 44, 34–50.
  87. Choi, J.; Loftness, V.; Aziz, A. Post-occupancy evaluation of 20 office buildings as basis for future IEQ standards and guidelines. Energy Build. 2012, 46, 167–175.
  88. Dascalaki, E.; Sermpetzoglou, V. Energy performance and indoor environmental quality in Hellenic schools. Energy Build. 2011, 43, 718–727.
  89. Chang, C.; Touchie, M.F. Investigating Wintertime Thermal Comfort of Post-War Multi-Unit Residential Buildings using Surveys and In-Suite Monitoring. In Proceedings of the 15th Canadian Conference on Building Science and Technology (CCBST), Vancouver, BC, Canada, 6–8 November 2017.
  90. Touchie, M.F.; Tzekova, E.S.; Siegel, J.A.; Purcell, B.; Morier, J. Evaluating Summertime Overheating in Multi-Unit Residential Buildings using Surveys and In-Suite Monitoring. In Proceedings of the 13th International Conference on Thermal Performance of the Exterior Envelopes of Whole Buildings XIII, Clearwater, FL, USA, 5–8 December 2016.
Figure 1. Sustainable Built Environment Performance Assessment Framework.
