Article

Assessment of Factors Causing Bias in Marketing-Related Publications

by Mangirdas Morkunas 1,*, Elzė Rudienė 2, Lukas Giriūnas 3 and Laura Daučiūnienė 3

1 Division of Farms’ and Enterprises’ Economics, Lithuanian Institute of Agrarian Economics, Vivulskio str. 4A, 03220 Vilnius, Lithuania
2 Business School, Vilnius University, Sauletekio ave. 21, 10222 Vilnius, Lithuania
3 Faculty of Public Governance and Business, Mykolas Romeris University, Ateities str. 20, 08303 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Submission received: 13 September 2020 / Revised: 14 October 2020 / Accepted: 15 October 2020 / Published: 24 October 2020
(This article belongs to the Special Issue Publication Ethics and Research Integrity)

Abstract

The present paper aims at revealing and ranking the factors that most frequently cause bias in marketing-related publications. To rank these factors, the authors employed the Analytic Hierarchy Process method with three different scales, representing all scale groups. The data for the study were obtained through an expert survey involving nine experts from both academia and the scientific publishing community. The findings confirm that the factors that most frequently cause bias in marketing-related publications are sampling and sample frame errors, failure to specify the inclusion and exclusion criteria for researched subjects, and non-responsiveness.

1. Introduction

Bias can be defined as any systematic error in the design, conduct or analysis of a study. In research, bias occurs when “systematic error [is] introduced into sampling or testing by selecting or encouraging one outcome or answer over others”. Bias can arise at any phase of research, including study design and data collection, as well as during data analysis and publication. Bias is not a dichotomous variable [1,2].
Most fields of science, including the social sciences, are currently facing a deep ‘reproducibility crisis’ [3,4,5]. Research is relevant in all fields of science, whether conducted in the form of experiments, quantitative or qualitative studies, or multidimensional research, so bias can manifest itself at every stage of the research process. Bias patterns and risk factors can thus be assessed across multiple topics within a discipline, across disciplines or larger scientific domains (social, biological and physical sciences), and across all of science [6,7,8,9]. Many of the biases described are acknowledged in the literature, though their extent remains unspecified. It is likely that different biases pose different threats to different disciplines. The interest of the authors of the present paper lies precisely in the biases of marketing research and in how their characteristics are interpreted and studied. Different scholars [6,10,11] have confirmed that bias is unevenly distributed. As found in the literature, bias in research can manifest itself in a number of ways [5,12,13,14]. Publication or literature biases are among the most frequently mentioned and discussed. Bias is also relevant in the life sciences [15,16,17,18,19,20], psychology [21,22], education [23,24] and economics [25,26].
An important bias manifests itself in the lower precision of small studies [27]. This could be related to the fact that smaller studies tend to report effects of larger magnitude. The problem could stem from genuine heterogeneity in study design [6]; it is relevant in different areas of research and is related to measurement bias, which occurs during the research process and reflects a discrepancy between the information collected and the information the researcher seeks to obtain. As confirmed by [28], “publications from authors working in the United States might overestimate effect sizes, a difference that could be due to multiple sociological factors”. Bias in measurement methods is very important for research processes [29], especially in medicine [30,31,32,33] and in the social sciences [34,35]. Low-precision biases in marketing-related publications could therefore be important and must also be properly taken into account.
Another type of bias arises when earlier studies reporting an effect overestimate its magnitude relative to later studies. This is due to a decreasing field-specific publication bias over time or to differences in study design between the earlier and the later studies [6,36]. On the other hand, under the decline effect, earlier studies may report extreme effects in any direction, because controversial findings have better opportunities to be published [36,37].
All the biases mentioned so far could be considered technical, and they represent only one side of bias. The psychological and sociological factors that may lead to the bias patterns described above [6,38] are usually forgotten. One of them is the pressure to publish: scientists subjected to direct or indirect pressure to publish are more likely to exaggerate the magnitude and importance of their results in order to secure many high-impact publications and new grants [39,40]. Another is peer control: researchers working in close collaboration are able to mutually control each other’s work and might therefore be less likely to engage in questionable research practices (QRP) [6,41]. If so, the risk of bias might be lower in collaborative research but, adjusting for this factor, higher in long-distance collaborations [6].
One more psychological factor is career stage: early-career researchers might be more likely to engage in QRP, because they are less experienced and have more to gain from taking risks [42]. However unbelievable it may seem, a certain role in research bias is also played by the gender of the researcher: males are more likely to take risks to achieve higher status and might therefore be more likely to engage in QRP. This hypothesis was supported by statistics of the US Office of Research Integrity [43], which, however, may have multiple alternative explanations [44]. Among psychological factors, individual integrity must also be borne in mind: narcissism and other psychopathologies underlie misbehavior and unethical decision making and therefore might also affect individual research practices [45,46].
In the area of marketing-related publications, all of the types of bias covered above are relevant; the issue, however, is to identify which of them are more important and which are less relevant. The novelty of the study reported in the present paper thus lies in identifying and ranking the factors that most frequently cause bias in the marketing literature. The findings also have practical implications, as they serve as a guide for young researchers in the marketing area, helping them to produce more robust and relevant results by avoiding the most common mistakes. The study complements a number of studies [47,48,49] by providing an importance ranking of bias-inducing factors.

2. Materials and Methods

2.1. The Researched Factors

An overview of the literature in the area showed a diversity of biases prevailing in different fields of science and the related research. This is understandable, because different research addresses different issues and uses different methods; nevertheless, it is still possible to group and combine the most relevant biases. The relevant research papers show that biases are examined in the medical and natural sciences, somewhat less frequently in the social sciences, and very little in the marketing area. It should be noted that many of the biases discussed are also relevant in marketing research; however, more detailed research in this respect is lacking. Therefore, our study aims to show the relevance of biases in marketing research and to rank them in terms of importance (with the help of experts and their positions).
Based on our observations from the literature review, the authors of the present paper distinguished 10 factors that are responsible for the greatest part of research bias in marketing-related publications.
(1)
Failure to examine and critically assess prior literature:
Most studies start with an idea, question or topic that is not new and has already been studied before. Bias often emerges from a failure to evaluate how the issue was addressed in previous studies and literature. Researchers and statisticians [14,50] have documented publication bias across a variety of academic disciplines, including the behavioral sciences [51], education [52,53], special education [54,55], ecology [56], medicine [28,57,58,59,60,61], psychology [13,50,51,62,63,64,65,66] and theatre and performance [5]. As [67] mentioned, when using databases such as Emerald, EBSCO, Jstor and ScienceDirect, it is necessary to read all the more detailed articles on the chosen topic. However, it should be noted that unpublished scientific results may differ systematically from published results [68,69,70]. Such bias is referred to as publication bias, and it can affect the literature analysis. According to [54], there are two possible approaches to correct literature analysis: search and inclusion procedures, and a formal statistical approach. The former involves “conducting searches (such as electronic database search, hand searching of journals, contacting experts) to identify all relevant studies, including gray literature (i.e., unpublished studies)”. Greenhalgh and Peacock [71] confirmed that relying on electronic databases alone fails to identify many relevant studies. An important aspect of bias in literature and publications is gray literature bias [6,10,71,72]. Polanin, Tanner-Smith and Hennessy defined gray literature as literature that “can be broadly thought of as anything not published in a professional journal, including dissertations, policy reports, conference proceedings, book chapters, or otherwise unpublished studies” [72].
(2)
Failure to specify the inclusion and exclusion criteria for study subjects:
Researchers often omit the specification of the research subjects through inclusion and exclusion criteria. If the criteria were named, other researchers would understand why the current results may differ from those of other published studies. For example, to be eligible for inclusion in a review, research papers may need to be based on and refer to empirical research, rather than commentaries, letters, editorials or reviews. This process involves identification, screening, and the application of inclusion and exclusion criteria [8].
(3)
Failure to determine and report errors in measurement methods:
Measurement bias occurs during the research process and reflects a discrepancy between the information collected and the information the researcher seeks to obtain. Reference [73] confirmed that response rates to surveys have fallen, particularly in developed countries, which highlights a real problem regarding measurement methods in research. Some authors [74,75,76] state that nonresponse leads to the practice of bringing into the study a sample of reluctant people, who may provide data filled with measurement errors. Two questions arise from this hypothesized relationship between a low propensity to respond and measurement error. The first has to do with the quality of the statistics (e.g., means, correlation coefficients) computed on the basis of a survey: does the mean square error of a statistic increase when sample persons who are less likely to be contacted or to cooperate are incorporated into the respondent pool? The level-of-effort analysis examines the change in statistics over increased levels of effort, taking a change in the statistics to indicate the risk of nonresponse bias, and no change to indicate the absence of risk. However, if measurement error is correlated with the level of effort (or response propensity), then an observed change, or lack of change, in the statistic may be due to measurement error and not to nonresponse bias [29].
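To make the level-of-effort analysis concrete, the following minimal Python sketch recomputes a survey estimate while progressively adding the harder-to-reach respondents; the data and column names are hypothetical, and a drifting estimate only signals a risk of nonresponse bias, for the reasons just given.

```python
import pandas as pd

# Hypothetical respondent-level data: 'attempts' is the number of contact
# attempts needed before the interview was completed; 'y' is the survey outcome.
df = pd.DataFrame({
    "attempts": [1, 1, 2, 2, 3, 4, 5, 6],
    "y":        [4.0, 5.0, 4.5, 5.5, 6.0, 6.5, 7.0, 7.5],
})

# Level-of-effort analysis: recompute the mean while progressively adding
# respondents who required more effort to reach. A drifting mean suggests a
# risk of nonresponse bias; a stable mean suggests (but cannot prove) its
# absence, since measurement error may itself vary with effort.
for cutoff in sorted(df["attempts"].unique()):
    subset = df[df["attempts"] <= cutoff]
    print(f"<= {cutoff} attempts: n = {len(subset)}, mean = {subset['y'].mean():.2f}")
```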
(4)
Failure to specify the exact statistical assumptions made in the analysis and failure to perform sample size analysis before starting the study:
When an omitted variable (i.e., an unmeasured variable not included in a model) creates a correlation between the error terms in the two stages of a model, traditional techniques, such as ordinary least squares (OLS) regression, may report biased coefficient estimates [77]. Since most studies include statistical analysis of the data, specifying the acceptable level of significance (the alpha level) and the exact statistical test methods used is commonplace. Most trials that claim two methods are equivalent (or non-superior) are underpowered, which means they have too few subjects. The sample size must be reasonable in order to obtain statistically reliable results. As stated in [78], prior studies can be used for estimation, but “although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. It is shown that the use of this approach often results in underpowered studies, sometimes to an alarming degree” (p. 1547).
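As an illustration of an a priori sample size analysis, the sketch below uses the statsmodels power module to solve for the per-group sample size of a two-group comparison; the effect size of 0.3 is an assumed, deliberately conservative figure, not a value taken from the studies cited above.

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample size analysis for a two-sample t-test. Taking a published
# effect size at face value risks an underpowered study, since publication
# bias tends to inflate it; a conservative Cohen's d is assumed here.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 176
```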
(5)
Improper Specification of the Population:
A biased study of a population loses validity in proportion to the degree of the bias [7]. Reference [79] compared the precision and bias of projections of total population with the precision and bias of projections along different dimensions and at the country level. Population specification bias can occur when a researcher does not understand what the object of the study is. Many researchers have used time series models to construct population forecasts and prediction intervals at the national level, but few have evaluated the accuracy of their forecasts or the out-of-sample validity of their prediction intervals [79]. Researchers studying bias in population specification have focused on patterns of overall population growth [80,81], while others have examined individual components, such as mortality, fertility and migration [41,82]; all of them, however, were linked by key issues such as uncertainty in population forecasts and the development of models that provide specific measures of uncertainty.
(6)
Sampling and Sample Frame Errors:
Survey sampling and sample frame errors occur when the wrong subpopulation is used to select a sample, or when, because of variation in the number or representativeness of those who respond, the resulting sample is not representative of the population concerned. In some cases, sample selection bias can lead researchers to find significant relationships that do not exist; in other cases, it can lead researchers to fail to find significant relationships that do exist [77]. Bias in sampling and the sample frame can also occur when inappropriate objects, with or without certain characteristics, are included [83].
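The toy simulation below, with entirely invented numbers, illustrates how a flawed sample frame (a subpopulation that systematically differs from the target population) biases an estimate no matter how large the sample drawn from it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target population: satisfaction scores of all customers.
population = rng.normal(loc=5.0, scale=1.5, size=100_000)

# Flawed frame: by construction, only the more satisfied customers
# (e.g., newsletter subscribers) are reachable through the frame.
frame = population[population > 4.0]

true_mean = population.mean()
frame_estimate = rng.choice(frame, size=500).mean()
print(f"True mean: {true_mean:.2f}, frame-based estimate: {frame_estimate:.2f}")
```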
(7)
Selection Errors:
In a two-stage view of selection [77], the first stage determines whether or not an observation from the overall population appears in the final sample, and the second models the relation between the hypothesized dependent and independent variables in that final sample. This bias is related to sampling error when the sample is selected by a non-probability method. It can happen when respondents choose to self-select into a study and only those interested respond; selection error can then arise because there may already be an inherent bias. It can also occur when respondents who are not relevant to the study participate, or when there is a bias in the way participants are assigned to groups.
(8)
Non-Responsiveness:
Nonresponse error can exist when the obtained sample differs from the originally selected sample. De Leeuw and de Heer [73] confirm that, according to best practices, researchers should attempt to maximize response rates in order to minimize the risk of nonresponse errors [84]. However, research [85,86,87] has called the traditional view into question by showing no strong relationship between nonresponse rates and nonresponse bias [29]. Nonresponse may occur because either the potential respondent was not contacted or the respondent refused to respond. The key factor is the absence of data, rather than inaccurate data. An increase in mean square error could occur because (a) incorporating the difficult-to-contact or reluctant respondents results in no nonresponse bias in the final estimate, but measurement error does exist, or (b) nonresponse bias exists, but the measurement error in these reluctant or difficult-to-contact respondents’ reports exceeds the nonresponse bias [73]. The second question has to do with methodological inquiries for detecting nonresponse bias. Although many types of analyses of nonresponse bias can be conducted, four predominant approaches have been used: (1) comparing characteristics of the achieved sample, usually demographic characteristics, with a benchmark survey [88]; (2) comparing frame information for respondents and nonrespondents [89]; (3) simulating statistics based on a restricted version of the observed protocol [85], often called a “level of effort” analysis; and (4) mounting experiments that attempt to produce variation in response rates across groups known to vary on a survey outcome of interest [90]. Findings from these studies show that nonresponse bias varies across the individual statistics within a survey, so the choice of approach depends on the question the project needs to answer. A sketch of the first approach is given below.
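A minimal sketch of approach (1), comparing the demographic composition of an achieved sample with a benchmark distribution; all proportions are invented for illustration.

```python
import pandas as pd

# Approach (1): compare demographics of the achieved sample against a
# benchmark (e.g., census) distribution. Proportions are invented.
benchmark = pd.Series({"18-34": 0.30, "35-54": 0.35, "55+": 0.35})
achieved  = pd.Series({"18-34": 0.18, "35-54": 0.37, "55+": 0.45})

# Large gaps flag under- or over-represented subgroups and hence a risk of
# nonresponse bias for any statistic correlated with the flagged attribute.
print((achieved - benchmark).round(2))
```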
(9)
Missing data, dropped subjects and use of an intention to treat analysis:
It must be acknowledged that the treatment and publication of incomplete or missing data can vary materially. This may in particular relate to the presentation of specific results when the analysis was performed but not properly described. It may be due not only to a lack of understanding on the investigator’s part, but also to coincidence. There is a lack of understanding in reporting and information, as it may simply be assumed that the data, or part of the results, are not relevant.
More ad hoc research methods can be used to supplement missing data [10,91]. It can always be assumed that the lack of data is accidental, but the reason for it may be bias. Negative research findings, which likely outweigh the number of positive findings, continue to be sidelined and left unpublished in file drawers [9,12,50,51,92]. Fanelli [6] examined the situation of negative and insignificant results in a study. The empirical studies in [6] have shown that negative or insignificant results are fairly common, in the social sciences in particular.
The loss of negative data is problematic because results that do not meet expectations and/or contradict the hypothesis are necessary for scientific progress. Negative findings are important because they encourage researchers to think critically, to reassess their current beliefs from a different angle, to correct and perhaps confirm them, and to move forward [60]. It is therefore essential that all findings, whether positive, negative or null, are made available to researchers in order to ensure a fair and comprehensive summary of research to inform policy, practice or further research.
(10)
Problems in pointing out weaknesses of own study:
The authors of [24] empirically identified the following weaknesses in studies: (1) lack of an underlying theory of action, (2) disproportionate reliance on descriptive data, (3) conflation of correlation with causation, (4) problems in measurement and statistical analyses, (5) absence of study replication, (6) weak designs without comparability between library and non-library groups and (7) evidence of publication bias focusing on positive results. Outcome reporting biases may be the most problematic, as every researcher tends to believe that his or her study is sound and without weaknesses. Research by psychologists has shown that at least 63% of researchers have not published full research results [93], thus not acknowledging any weaknesses in their work. This can be treated as ignoring the identification of study weaknesses and biasing the results. Withholding negative, inconclusive or nonsignificant findings distorts the understanding of research within a domain and causes the potential benefits of an intervention to be overestimated [57].

2.2. Analytic Hierarchy Process Method

The Analytic Hierarchy Process (AHP) method found application in marketing science in the first year of its invention [92]. It is used in strategic marketing planning [94], analyzing the marketing mix [95], revealing consumer intentions [96], assessing determinants of purchase decisions [97], evaluating marketing personnel [98] and in comprehensive market evaluation [99]. AHP application is also common in publishing research [100,101,102]. It is also a common tool for ranking independent factors that have an impact on a complex phenomenon [103,104,105]. In order to ensure the robustness of the results, we chose an AHP method with three different scales representing the three main scale groups: from the first category we chose the inverse linear scale [106], and from the second and third categories the logarithmic [107] and power [108] scales, respectively. Once the eigenvectors for all three scales had been computed, the next step was the normalization of the obtained results. We chose AHP as a tool for our research because it is a suitable technique for evaluating phenomena that cannot be assessed using purely quantitative methods [109]. The number of factors potentially causing bias in marketing-related publications was limited to 10 positions (the maximum number of alternatives that the AHP method is capable of processing adequately).
The factors used in the research were the following: failure to examine and critically assess the background research literature; failure to specify the inclusion and exclusion criteria for researched subjects; failure to determine and report the error of measurement methods; failure to specify the exact statistical assumptions made in the analysis and failure to perform sample size analysis before the study begins; improper population specification; sampling and sample frame errors; selection errors; non-responsiveness; missing data, dropped subjects and use of an intention-to-treat analysis; and problems in pointing out the weaknesses of one’s own study. The inquiry method used for the purpose of the study was an interview involving nine experts. Six of the experts were professors at business schools and/or marketing/management departments at universities in Lithuania, Poland and the Czech Republic. Three experts work as editors-in-chief or managing editors of Scopus-indexed business and economics journals. The number of experts exceeds the required validity threshold [110].
Research papers in the area recognize 11 different AHP measurement scales, organized into three categories, that are suitable for research [111]. It is considered that different measurement scales cause no significant differences in research outcomes; nevertheless, in order to ensure the robustness of the results, using a combination of different scales is recommended. We chose three measurement scales representing all three categories. The mathematical expressions of the selected scales are presented below:
Inverse linear scale: $c = \dfrac{9}{10 - x}$;   Logarithmic scale: $c = \log_a(x + a - 1)$;   Power scale: $c = x^{a}$;
where: x—value on the integer judging scale for pairwise comparisons, from 1 to 9; c—the ratio used as an entry in the decision matrix [112] (p. 3).
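For illustration, the sketch below implements the three transformations; the parameter a of the logarithmic and power scales is set to an assumed value of 2, since the value actually used is not reported above.

```python
import numpy as np

def inverse_linear(x):
    """Ma-Zheng 9/9-9/1 scale: c = 9 / (10 - x)."""
    return 9.0 / (10.0 - x)

def logarithmic(x, a=2.0):
    """Logarithmic scale: c = log_a(x + a - 1); a = 2 is an assumed value."""
    return np.log(x + a - 1.0) / np.log(a)

def power(x, a=2.0):
    """Power scale: c = x ** a; a = 2 is an assumed value."""
    return x ** a

x = np.arange(1, 10)  # integer judgments 1..9
for name, scale in [("inverse linear", inverse_linear),
                    ("logarithmic", logarithmic),
                    ("power", power)]:
    print(f"{name:>14}: {np.round(scale(x), 2)}")
```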
A typical data processing workflow in AHP is as follows:
First, experts are presented with pairwise comparison matrices. After all of the experts have evaluated the factors causing bias in marketing-related publications using an ex ante prepared pairwise questionnaire form, each completed questionnaire is checked for consistency. A matrix is considered consistent if $p_{ik} = p_{ij}\,p_{jk}$ for all $i, j, k$ and a priority vector $w = (\omega_1, \ldots, \omega_n)$ exists such that $p_{ij} = \omega_i/\omega_j$ for all $i, j$. For the calculation of the Consistency Index of the experts, $\lambda_{max}$ is calculated for every matrix:
$\lambda_{max} = \sum_{j=1}^{n} \dfrac{(P \cdot v)_j}{n \cdot v_j};$
here:
  • $\lambda_{max}$—the largest eigenvalue of each standardized matrix;
  • n—the number of independent rows in the matrix;
  • $v_j$—the j-th component of the eigenvector v of the matrix.
A filled expert pairwise comparison matrix A is considered consistent when $\lambda_{max} = n$, although in real-life situations this happens quite infrequently. If, after marginal changes of $p_{ij}$, matrix A satisfies the preselected compatibility threshold (0.2 was selected), $\lambda_{max}$ becomes close to n. After calculating the eigenvalue $\lambda_{max}$, the Consistency Index CI is calculated:
$CI = \dfrac{\lambda_{max} - n}{n - 1};$
here:
  • CI—Consistency Index;
  • n—number of possible alternatives.
The Consistency Index is used to calculate the overall Consistency Ratio:
$CR = \dfrac{CI}{RI};$
where:
  • CR—Consistency Ratio;
  • RI—Random Index.
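The consistency check can be sketched as follows. The 3 × 3 judgment matrix is hypothetical, and λmax is taken directly as the principal eigenvalue of the matrix, an equivalent route to the summation formula above; the Random Index values are Saaty's standard table.

```python
import numpy as np

# Saaty's Random Index for matrix orders 1..10 (standard table).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency(P):
    """Return (lambda_max, CI, CR) for a pairwise comparison matrix P."""
    n = P.shape[0]
    lambda_max = max(np.linalg.eigvals(P).real)   # principal eigenvalue
    ci = (lambda_max - n) / (n - 1)
    return lambda_max, ci, ci / RI[n]

# Hypothetical reciprocal 3x3 expert judgment matrix.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
lam, ci, cr = consistency(P)
print(f"lambda_max = {lam:.3f}, CI = {ci:.3f}, CR = {cr:.3f}")
```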
If the matrices show CR < 0.2, the aggregated expert evaluation indices are calculated using the geometric mean formula:
$p_{ij}^{P} = \sqrt[n]{p_{ij}^{1} \times p_{ij}^{2} \times \cdots \times p_{ij}^{n}};$
where:
  • $p_{ij}^{P}$—aggregated evaluation of the element belonging to row i and column j;
  • n—number of the experts’ pairwise comparison matrices.
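A sketch of the aggregation step, taking the element-wise geometric mean over hypothetical expert matrices:

```python
import numpy as np
from scipy.stats import gmean

# Three hypothetical reciprocal 3x3 judgment matrices, one per expert.
expert_matrices = np.stack([
    np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 2.0], [1/5, 1/2, 1.0]]),
    np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 3.0], [1/4, 1/3, 1.0]]),
    np.array([[1.0, 4.0, 6.0], [1/4, 1.0, 2.0], [1/6, 1/2, 1.0]]),
])

# Element-wise geometric mean across experts gives the aggregated p_ij^P.
aggregated = gmean(expert_matrices, axis=0)
print(np.round(aggregated, 3))
```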
Once the aggregated matrices have been calculated, the consistency check procedure has to be performed again. If a matrix is found consistent, the preference ranks of the alternatives are calculated using the formula:
$\omega_j = \dfrac{\sqrt[n]{\prod_{i=1}^{n} p_{ji}^{P}}}{\sum_{k=1}^{n} \sqrt[n]{\prod_{i=1}^{n} p_{ki}^{P}}};$
where: $\omega_j$—weight of the j-th alternative.
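Assuming the row geometric mean reading of the formula above (consistent with the prioritization method of [112]), the weights can be computed as:

```python
import numpy as np

def rgmm_weights(P):
    """Row geometric mean method: the n-th root of each row's product,
    normalized so that the weights sum to one."""
    n = P.shape[0]
    row_gm = np.prod(P, axis=1) ** (1.0 / n)
    return row_gm / row_gm.sum()

P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(np.round(rgmm_weights(P), 3))  # e.g. [0.648 0.230 0.122]
```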
If the matrices are consistent but the expert evaluations are significantly dispersed, the index of expert mutual agreement (S*) is calculated [113]:
$S^{*} = \dfrac{\frac{1}{\exp(H_{\beta})} - \frac{1}{\exp(H_{\gamma\,max})}}{\frac{1}{\exp(H_{\alpha\,min})} - \frac{1}{\exp(H_{\gamma\,max})}};$
where:
  • H α —Shannon alpha diversity;
  • H β —Shannon beta diversity;
  • H γ —Shannon gamma diversity.
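One plausible way to obtain the three Shannon diversity components from individual priority vectors uses the additive decomposition Hγ = Hα + Hβ, as sketched below; the aggregation step and the normalizing Hα min and Hγ max terms of S* depend on implementation details not reported here, so this is an assumption-laden illustration rather than the exact procedure used in the study.

```python
import numpy as np

def shannon(p):
    """Shannon entropy of a priority (probability) vector."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

# Hypothetical priority vectors of three experts over three factors.
priorities = np.array([[0.65, 0.23, 0.12],
                       [0.55, 0.30, 0.15],
                       [0.60, 0.25, 0.15]])

h_alpha = np.mean([shannon(p) for p in priorities])  # mean within-expert diversity
h_gamma = shannon(priorities.mean(axis=0))           # diversity of pooled priorities
h_beta  = h_gamma - h_alpha                          # between-expert component
print(f"H_alpha = {h_alpha:.3f}, H_beta = {h_beta:.3f}, H_gamma = {h_gamma:.3f}")
```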

3. Results and Discussion

The results of calculations are presented in Table 1, Table 2 and Table 3:
Although the differences in reliability indicators obtained using the different scales are truly marginal, the highest level of the consensus index (83.5%) was obtained using the logarithmic scale. This is an accidental result rather than a rule, and should be attributed to the characteristics of the data researched, as there is no undisputable proof of the superiority of one scale over the others in terms of the consensus index, and employing a combination of scales is preferred in order to achieve robustness of the results [112].
The values of computed eigenvectors are presented in Table 2.
Analyzing the eigenvectors of the research bias inducing factors, we grouped them into two distinct groups. The important factors are the items whose eigenvectors are above 0.1: sampling and sample frame errors, failure to specify the inclusion and exclusion criteria for researched subjects, non-responsiveness, failure to examine and critically assess the prior literature, and selection errors. The rest are less important. However, differences still exist within this group, as the last-ranked factor, problems in pointing out the weaknesses of one’s own study, has an eigenvector value half that of improper population specification, which indicates its lesser influence on the occurrence of bias in marketing-related publications.
The results obtained confirm the necessity of using a combination of different measurement scales, as the results obtained with the inverse scale differ from those of the logarithmic and power scales not only in eigenvector values, but also in the ranks of the studied factors. The differences between the results derived with the logarithmic and power scales are not significant and appear only in eigenvector values, whereas the results of the inverse scale also differ in ranks. In order to offset these differences, the results were normalized. The process was followed by the computation of the final ranks of the researched factors that induce bias in marketing-related publications (Table 3).
In other studies, publication biases are typically indicated and analyzed through one or a few of the most important or dominant factors; this is especially true of marketing publications.
An analysis of the most frequent causes of bias in marketing-related publications led the authors to the conclusion that the most important factors causing bias are sampling and sample frame errors and failure to specify the inclusion and exclusion criteria for the study subjects (see Table 3).
As evidenced by the data in Table 1, these results are confirmed by the normalized eigenvectors: the power scale yields the largest value, and the logarithmic scale confirms it as well. This ranking shows the importance of sampling and sample frame errors in marketing publications. In different sciences, this type of bias bears different importance. For example, a study in medical science by Lin [27] shows that “sampling error did not cause noticeable bias but the standardized mean difference, odds ratio, risk ratio, and risk difference suffered from this bias to different extents”. The research results of [114] show the importance of sampling and sample frame errors from a different angle: how to decrease the sample size in a stratified sample design while achieving equivalent precision. The factor ranked second among the most frequent causes of bias in marketing-related publications is failure to specify the inclusion and exclusion criteria for the study subjects. Researchers use inclusion and exclusion criteria to determine the characteristics of the subjects or elements in a study. Typical inclusion criteria might be demographic, geographic or occupational [115]. Exclusion criteria are not the opposite of inclusion criteria: they identify attributes that prevent a person from being included in the study [116]. The fundamental problem arises when researchers do not define the inclusion and exclusion criteria clearly. Simply indicating that the subjects in the study met the inclusion criteria is insufficient and does not allow readers to judge the validity of the decision. Selecting inclusion criteria that are not related to the research object and do not describe the variables in sufficient detail is another potential research pitfall.
Ranking third among the most frequent causes of bias in marketing-related publications is non-responsiveness bias. Reference [117] confirmed a general decline in survey response rates. The new wave of online polls could be an alternative; nevertheless, even that does not ensure that non-responsiveness biases can be avoided. Non-responsiveness bias in marketing-related publications is common across different types of surveys. Probability-based surveys still display less bias than non-probability surveys [118]. One way to address non-responsiveness bias is therefore to choose the correct type of survey.
One more frequent reason for bias in marketing-related publications is literature bias (see Table 2). As mentioned in the literature review of the present paper, this type of bias is relevant in many fields of science; however, it is less common in the social sciences [119]. In marketing-related publications, publication and literature bias are directly related. The findings of the studies conducted by [119] show that “there is a strong relationship between the results of a study and whether it was published, a pattern indicative of publication bias”.
Selection bias was ranked fifth among the most frequent causes of bias in marketing-related publications (see Table 2). This bias is related to sampling error. Selection bias is present and relevant in different fields of science, and in medicine in particular. Apparently, selection bias is also important in marketing publications. Selection bias should be addressed before starting the study [120], as it is a hidden problem [121]. One possible way to avoid selection biases is to contact someone who is knowledgeable about causal inference methods [121].
The next two factors that most frequently cause bias in marketing-related publications, i.e., improper population specification and failure to determine and report the error of measurement methods, show eigenvector values above 0.1 on the logarithmic scale. Those two factors are important, although to a lesser extent than the first five covered earlier.
The last three factors are of lesser importance among the most frequent causes of bias in marketing-related publications: failure to specify the exact statistical assumptions and to perform sample size analysis; missing data, dropped subjects and use of an intention-to-treat analysis; and problems in pointing out the weaknesses of one’s own study. These can be seen as ignoring the identification of study weaknesses and biasing the results. They are relevant to bias in marketing publications as well but, as concluded in the present study, are not as important as the first seven factors.

4. Conclusions

The research literature in the area suggests a fairly broad range of factors that in one way or another cause bias in marketing-related publications. For the purpose of the study covered by the present paper, the authors ranked these factors, determining the ones that most frequently cause bias in marketing-related publications. The study concluded that sampling and sample frame errors are the factors most frequently causing bias in marketing-related publications. This may be attributed to the fact that marketing is about revealing people’s preferences, and improper selection of the sampling frame may discredit the main research question, not merely raise doubts about the robustness of the results. It should be noted that this is specific to the marketing area, and other disciplines in the social sciences may show different rankings of the factors. Failure to specify the inclusion and exclusion criteria for researched subjects, which was ranked second, is very important in a much broader context [122]. The third factor, non-responsiveness, is once again related to possible improper mirroring of the researched population, which is very important in marketing research [123].
The least important factors were missing data, dropped subjects and use of an intention-to-treat analysis, and problems in pointing out the weaknesses of one’s own study. These bias-creating factors should be attributed not to problems in research design or to some methodological weakness, but rather to the ethics of the researcher. In general, research ethics is improving as novel instruments for assuring it are being implemented [124,125], so this research problem is of diminishing importance.
The findings of this study should be considered preliminary and treated as a trigger for starting wider scientific discussions on bias-inducing factors in research publications. It would be scientifically sound to conduct similar research in other fields of the social sciences in order to reveal common factors causing bias in research publications. The findings of the study are instrumental in creating universal recommendations helping to eradicate or mitigate the effect of at least some of the factors creating bias in the scientific literature.

Author Contributions

Conceptualization, M.M.; literature review, E.R.; methodology, M.M.; data curation, E.R.; validation L.G.; formal analysis L.D.; writing—original draft preparation, M.M. and E.R.; writing—review and editing, M.M., L.G. and L.D.; funding acquisition L.G. and L.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pannucci, C.J.; Wilkins, E.G. Identifying and Avoiding Bias in Research. Plast. Reconstr. Surg. 2011, 126, 619–625. [Google Scholar] [CrossRef] [PubMed]
  2. Althubaiti, A. Information bias in health research: Definition, pitfalls, and adjustment methods. J. Multidiscip. Healthc. 2016, 9, 211–217. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Thiem, A.; Mkrtchyan, L.; Haesebrouck, T.; Sanchez, D. Algorithmic bias in social research: A meta-analysis. PLoS ONE 2020, 15, e0233625. [Google Scholar] [CrossRef] [PubMed]
  4. Munafo, M.R.; Nosek, B.A.; Bishop, D.V.M.; Button, K.S.; Chambers, C.D.; Percie du Sert, N. A Manifesto for Reproducible Science. Nat. Hum. Behav. 2017, 1, 21. [Google Scholar] [CrossRef] [Green Version]
  5. Bial, H. Guest editor’s introduction: Failing better. Theatre Top. 2018, 28, 61–62. [Google Scholar] [CrossRef] [Green Version]
  6. Fanelli, D. When East meets West, does bias increase? A preliminary study on South Korea, United States and other countries. In 8th International Conference on Webometrics, Informetrics and Scientometrics and 13th COLLNET Meeting; Ho-Nam, C., Hye-Sun, K., Kyung-Ran, N., Seon-Hee, L., Hye-Jin, K., Kretschmer, H., Eds.; KISTI: Seoul, Korea, 2012; pp. 47–48. [Google Scholar]
  7. Jamieson, L. Random and Systematic Bias in Population Oral Health Research: An introduction. Community Dent. Health 2020, 37, 83. [Google Scholar] [CrossRef]
  8. Russell, G.; Mandy, W.; Elliott, D.; White, R.; Pittwood, T.; Ford, T. Selection bias on intellectual ability in autism research: A cross-sectional review and meta-analysis. Mol. Autism 2019, 10, 9. [Google Scholar] [CrossRef]
  9. Stefl-Mabry, J.; Radlick, M.; Mersand, S.; Gulatee, Y. School Library Research: Publication Bias and the File Drawer Effect. J. Thought 2019, 53, 19–34. [Google Scholar]
  10. Song, F.; Parekh, S.; Hooper, L.; Loke, Y.K.; Ryder, J.; Sutton, A.J.; Hing, C.; Kwok, C.S.; Pang, C.; Harvey, I. Dissemination and publication of research findings: An updated review of related biases. Health Technol. Assess. 2010, 14, 1–12. [Google Scholar] [CrossRef]
  11. Chavalarias, D.; Ioannidis, J.P.A. Science mapping analysis characterizes 235 biases in biomedical research. Clin. Epidemiol. 2010, 63, 1205–1215. [Google Scholar] [CrossRef]
  12. Cook, G.B.; Therrien, J.W. Null Effects and Publication Bias in Special Education Research. Behav. Disord. 2017, 42, 149–158. [Google Scholar] [CrossRef]
  13. Button, S.K.; Bal, L.; Clark, A.; Shipley, T. Preventing the ends from justifying the means: Withholding results to address publication bias in peer-review. BMC Psychol. 2016, 4, 59. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Vella, F. Estimating models with sample selection bias: A survey. J. Hum. Resour. 1998, 33, 127–169. [Google Scholar] [CrossRef] [Green Version]
  15. Ayorinde, A.A.; Williams, I.; Mannion, R.; Song, F.; Skrybant, M.; Lilford, J.R.; Chen, F.Y. Assessment of publication bias and outcome reporting bias in systematic reviews of health services and delivery research: A meta-epidemiological study. PLoS ONE 2020, 15, e0227580. [Google Scholar] [CrossRef] [Green Version]
  16. Reio, T.G., Jr. Survey Nonresponse Bias in Social Science Research. New Horiz. Adult Educ. Hum. Resour. Dev. 2007, 21, 48–51. [Google Scholar] [CrossRef]
  17. Mulimani, P. Publication bias towards Western populations harms humanity. Nat. Hum. Behav. 2019, 3, 1026–1027. [Google Scholar] [CrossRef]
  18. Heidweiller-Schreurs, V. Publication bias may exist among prognostic accuracy studies of middle cerebral artery Doppler ultrasound. J. Clin. Epidemiol. 2019, 116, 1–8. [Google Scholar] [CrossRef]
  19. Shi, L.; Lin, L. The trim-and-fill method for publication bias: Practical guidelines and recommendations based on a large database of meta-analyses. Medicine 2019, 98, 23. [Google Scholar] [CrossRef]
  20. DeVito, N.J.; Goldacre, B. Catalogue of bias: Publication bias. BMJ Evid. Based Med. 2019, 24, 53–54. [Google Scholar] [CrossRef] [Green Version]
  21. Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 4691–4697. [Google Scholar]
  22. van Aert, R.C.M.; Wicherts, I.M.; van Assen, M.A.L.M. Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis. PLoS ONE 2019, 14, e0215052. [Google Scholar] [CrossRef] [Green Version]
  23. Lozano-Blasco, R.; Cortés-Pascual, A.; Latorre-Martinez, P.M. Being a cybervictim and a cyberbully—The duality of cyberbullying: A meta-analysis. Comput. Hum. Behav. 2020, 111. [Google Scholar] [CrossRef]
  24. Stefl-Mabry, J.; Radlick, M.S. School library research in the real world—What does it really take? In Proceedings of the International Association of School Librarians Conference Proceedings, Long Beach, CA, USA, 8 August 2017. [Google Scholar]
  25. Iwasaki, I.; Ma, X.; Mizobata, S. Corporate ownership and managerial turnover in China and Eastern Europe: A comparative meta-analysis. J. Econ. Bus. 2020. [Google Scholar] [CrossRef]
  26. Nelson, A.J. The power of stereotyping and confirmation bias to overwhelm accurate assessment: The case of economics, gender, and risk aversion. J. Econ. Methodol. 2014, 21, 211–231. [Google Scholar] [CrossRef]
  27. Lin, L. Bias caused by sampling error in meta-analysis with small sample sizes. PLoS ONE 2018, 13, e0204056. [Google Scholar] [CrossRef] [Green Version]
  28. Fanelli, D.; Ioannidis, J.P.A. US studies may overestimate effect sizes in softer research. Proc. Natl. Acad. Sci. USA 2013, 110, 15031–15036. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Groves, R.M. Nonresponse Rates and Nonresponse Error in Household Surveys. Public Opin. Q. 2006, 70, 646–675. [Google Scholar] [CrossRef]
  30. Dehkordi, A. Effect of Bias in Contrast Agent Concentration Measurement on Estimated Pharmacokinetic Parameters in Brain Dynamic Contrast-Enhanced Magnetic Resonance Imaging Studies. Iran. J. Med Phys. 2020, 17, 142–152. [Google Scholar]
  31. Shu, D.; Yi, G.Y. Causal inference with measurement error in outcomes: Bias analysis and estimation methods. Stat. Methods Int. Med. Res. 2019, 28, 2049–2068. [Google Scholar] [CrossRef]
  32. Frenkel, R.; Farrance, I.; Badrick, T. Bias in analytical chemistry: A review of selected procedures for incorporating uncorrected bias into the expanded uncertainty of analytical measurements and a graphical method for evaluating the concordance of reference and test procedures. Clin. Chim. Acta 2019, 495, 129–138. [Google Scholar] [CrossRef]
  33. Handelsman, D.J.; Ly, L.P. An Accurate Substitution Method to Minimize Left Censoring Bias in Serum Steroid Measurements. Endocrinology 2019, 160, 2395–2400. [Google Scholar] [CrossRef]
  34. Bishara, A.J.; Hittner, J.B. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality. Educ. Psychol. Meas. 2015, 75, 785–804. [Google Scholar] [CrossRef]
  35. Charles, L.K.; Dattalo, V.P. Minimizing Social Desirability Bias in Measuring Sensitive Topics: The Use of Forgiving Language in Item Development. J. Soc. Serv. Res. 2018, 44, 587–599. [Google Scholar]
  36. Schooler, J. Unpublished results hide the decline effect. Nature 2011, 470, 437. [Google Scholar] [CrossRef] [PubMed]
  37. Ioannidis, J.P.A. Why Most Published Research Findings Are False. PLoS Med. 2005, 2, e124. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Pang, D.; Yang, L. Psychological acceptance mechanism and influencing factors of scientific research education. Rev. Argent. Clín. Psicol. 2020, 2, 731–736. [Google Scholar]
  39. Martinson, B.C.; Crain, A.L.; Anderson, M.S.; De Vries, R. Institutions’ expectations for researchers’ self-funding, federal grant holding and private industry involvement: Manifold drivers of self-interest and researcher behavior. Acad. Med. 2009, 84, 1491–1499. [Google Scholar] [CrossRef]
  40. Qiu, J. Publish or perish in China. Nature 2010, 463, 142–143. [Google Scholar] [CrossRef]
  41. Lee, C.; Schrank, A. Incubating innovation or cultivating corruption? The developmental state and the life sciences in Asia. Soc. Forces 2010, 88, 1231–1255. [Google Scholar] [CrossRef]
  42. Lacetera, N.; Zirulia, L. The economics of scientific misconduct. J. Law Econ. Organ. 2011, 27, 568–603. [Google Scholar] [CrossRef]
  43. Fang, F.C.; Bennett, J.W.; Casadevall, A. Males are overrepresented among life science researchers committing scientific misconduct. mBio 2013, 4, e00640-12. [Google Scholar] [CrossRef] [Green Version]
  44. Kaatz, A.; Vogelman, P.N.; Carnes, M. Are men more likely than women to commit scientific misconduct? Maybe, maybe not. mBio 2013, 4, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Bailey, C.D. Psychopathy, Academic accountants’ attitudes toward unethical research practices, and publication success. Account. Rev. 2015, 90, 1307–1332. [Google Scholar] [CrossRef]
  46. Antes, A.L.; Brown, R.P.; Murphy, S.T. Personality and ethical decision-making in research: The role of perceptions of self and others. Empir. Res. Hum. Res. Ethics 2007, 2, 15–34. [Google Scholar] [CrossRef] [PubMed]
  47. MacKenzie, S.B.; Podsakoff, P.M. Common method bias in marketing: Causes, mechanisms, and procedural remedies. J. Retail. 2012, 88, 542–555. [Google Scholar] [CrossRef]
  48. Eisend, M.; Tarrahi, F. Meta-analysis selection bias in marketing research. Int. J. Res. Mark. 2014, 31, 317–326. [Google Scholar] [CrossRef]
  49. Zaefarian, G.; Kadile, V.; Henneberg, S.C.; Leischnig, A. Endogeneity bias in marketing research: Problem, causes and remedies. Ind. Mark. Manag. 2017, 65, 39–46. [Google Scholar] [CrossRef]
  50. Kakoschke, N.; Kemps, E.; Tiggemann, M. Approach bias modification training and consumption: A review of the literature. Addict. Behav. 2017, 64, 21–28. [Google Scholar] [CrossRef]
  51. Rosenthal, M.; Symoens, J.; De Brabander, M.; Goldstein, G. Immunoregulation with levamisole. Springer Semin. Immunopathol. 1979, 2, 49–68. [Google Scholar]
  52. Piotrowskj, C. Scholarly research on educational adaption of social media: Is there evidence of publication bias? Coll. Stud. J. 2015, 49, 447–451. [Google Scholar]
  53. Welner, G.K.; Molnar, A. Truthiness in Education. Educ. Week 2007, 26, 32–44. [Google Scholar]
  54. Gage, N.A.; Cook, G.B.; Reichow, B. Publication Bias in Special Education Meta-Analyses. Except. Child. 2017, 83, 428–445. [Google Scholar] [CrossRef]
  55. Makel, C.M.; Steenbergen-Hu, S.; Olszewski-Kubilius, P. What One Hundred Years of Research Says About the Effects of Ability Grouping and Acceleration on K–12 Students’ Academic Achievement: Findings of Two Second-Order Meta-Analyses. Rev. Educ. Res. 2016, 86, 849–899. [Google Scholar]
  56. Statzner, B.; Resh, H.V. Negative changes in the scientific publication process in ecology: Potential causes and consequences. Freshw. Biol. 2010. [Google Scholar] [CrossRef]
  57. Ekmekci, E. The Flipped Writing Classroom in Turkish EFL Context: A Comparative Study on a New Model. Turk. Online J. Distance Educ. 2017, 18, 151–167. [Google Scholar] [CrossRef]
  58. Ioannidis, J.P.A.; Trikalinos, T.A. Early extreme contradictory estimates may appear in published research: The Proteus phenomenon in molecular genetics research and randomized trials. J. Clin. Epidemiol. 2005, 58, 543–549. [Google Scholar] [CrossRef]
  59. Young, S.N. Bias in the research literature and conflict of interest: An issue for publishers, editors, reviewers and authors, and it is not just about the money. J. Psychiatry Neurosci. 2009, 34, 412–417. [Google Scholar] [PubMed]
  60. Dwan, K.; Altman, D.G.; Arnaiz, J.A.; Bloom, J.; Chan, A.-W.; Cronin, E.; Decullier, E.; Easterbrook, P.J.; Von Elm, E.; Gamble, G.; et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 2008, 3, e3081. [Google Scholar] [CrossRef] [Green Version]
  61. Mlinaric, A.; Horvat, M.; Smolcic, S.V. Dealing with the positive publication bias: Why you should really publish your negative results. Biochem. Medica 2017, 27, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Francis, R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry; Stationary Office: London, UK, 2013. [Google Scholar]
  63. Jha, M.K.; Arnold, K.; Moriasi, N.D.; Gassman, P.W.; Abbaspour, K.C.; White, M.J.; Srinivasan, R.; Santhi, C.; Harmel, R.D.; van Griensven, A.; et al. SWAT: Model Use, Calibration and Validation. Trans. ASABE 2012, 55, 1491–1508. [Google Scholar]
  64. Owuamalam, K.C.; Rubin, M.; Spears, R. Addressing Evidential and Theoretical Inconsistencies in System-Justification Theory with a Social Identity Model of System Attitudes. Curr. Dir. Psychol. Sci. 2018. [Google Scholar] [CrossRef] [Green Version]
  65. Sterling, T.; Savarese, D.; Becker, D.J.; Dorband, J.; Ranawake, U.; Packer, V.C. BEOWULF: A Parallel Workstation for Scientific Computation. In Proceedings of the 1995 International Conference on Parallel Processing, Urbana-Champain, IL, USA, 14–18 August 1995; CRC Press: Urbana-Champain, IL, USA, 1995; Volume I: Archit, pp. 11–14. [Google Scholar]
  66. Davis, M.S.; Wester, K.L.; King, B. Narcissism, entitlement, and questionable research practices in counseling: A pilot study. Couns. Dev. 2008, 86, 200–210. [Google Scholar] [CrossRef]
  67. Laroche, P.; Soulez, S. La méthodologie de la méta-analyse en marketing. Recherche et Applications en Marketing; Sage Publications, Ltd.: Thousand Oaks, CA, USA, 2012; Volume 27, pp. 79–105. [Google Scholar]
  68. Dickersin, K.; Min, C.M. Factors influencing publication results: Follow-up on applications submitted to two institutional review boards. JAMA 1991, 267, 374–378. [Google Scholar] [CrossRef]
  69. Song, Z.; Guan, B.; Bergman, A.; Nicholson, D.W.; Thornberry, N.A.; Peterson, E.P.; Steller, H. Biochemical and genetic interactions between Drosophila caspases and the proapoptotic genes rpr, hid, and grim. Mol. Cell. Biol. 2000, 20, 2907–2914. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Dickersin, K. Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. In Publication Bias in Meta-Analysis—Prevention, Assessment and Adjustments; Rothstein, H.R., Ed.; John Wiley & Sons: New York, NY, USA, 2005; pp. 11–33. [Google Scholar]
  71. Greenhalgh, T.; Peacock, R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: Audit of primary sources. Br. Med. J. 2005, 331, 1064–1065. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Polanin, J.R.; Tanner-Smith, E.E.; Hennessy, E.A. Estimating the difference between published and unpublished effect sizes a metareview. Rev. Educ. Res. 2016, 86, 207–236. [Google Scholar] [CrossRef]
  73. De Leeuw, E.; de Heer, W. Trends in Household Survey Nonresponse: A Longitudinal and International Comparison; Survey Nonresponse; Wiley: New York, NY, USA, 2002; pp. 41–54. [Google Scholar]
  74. Biemer, P.P. Nonresponse Bias and Measurement Bias in a Comparision of Face to Face and Telephone Interviewing. J. Off. Stat. 2001, 17, 295–320. [Google Scholar]
  75. Cannell, F.C.; Fowler, F.J. Comparision of a self-enumerative procedure and personal interview: A validity study. Public Opin. Q. 1963, 27, 250–264. [Google Scholar] [CrossRef]
  76. Muller, J.-L. Pour une revue quantitative de la littérature: Les méta-analyses. Psychol. Franç. 1988, 33, 295–303. [Google Scholar]
  77. Certo, S.T.; Busenbark, J.R.; Woo, H.S.; Semadeni, M. Sample selection bias and Heckman models in strategic management research. Strateg. Manag. J. 2016, 37, 2639–2657. [Google Scholar] [CrossRef]
  78. Anderson, S.F.; Kelley, K.; Maxwell, S.E. Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty. Psychol. Sci. 2017, 28, 1547–1562. [Google Scholar] [CrossRef] [Green Version]
  79. Rayer, S.; Smith, K.S. Population Projections by Age for Florida and its Counties: Assessing Accuracy and the Impact of Adjustments. Popul. Res. Policy Rev. 2014, 33, 747–770. [Google Scholar] [CrossRef]
  80. Tayman, J.; Smith, K.S.; Lin, J. Precision, bias, and uncertainty for state population forecasts: An exploratory analysis of time series models. Popul. Res. Policy Rev. 2007, 26, 347–369. [Google Scholar] [CrossRef]
  81. Alho, M.J.; Spencer, D.B. The Practical Specification of the Expected Error of Population Forecasts. J. Off. Stat. 1997, 13, 203–225. [Google Scholar]
  82. Pflaumer, P. Forecasting US population totals with the Box-Jenkins Approach. Int. J. Forecast. 1992, 8, 329–338. [Google Scholar] [CrossRef]
  83. Keilman, N.; Pham, Q.D.; Hetland, A. Why population forecasts should be probabilistic—Illustrated by the case of Norway. Demogr. Res. 2002, 6, 409–454. [Google Scholar] [CrossRef] [Green Version]
  84. Sartori, E.A. An Estimator for Some Binary-Outcome Selection Models without Exclusion Restrictions. Political Analysis 2003, 11, 111–138. [Google Scholar] [CrossRef]
  85. Japec, L.; Lundquist, P. Bortfallet—Påverkas det av Intervjuarnas Attityder och Strategier? Rapport inédit; Statistics Sweden: Stockholm, Sweden, 2000. [Google Scholar]
  86. Curtin, C.; Presser, S.; Singer, E. The Effects of Response Rate Changes on the Index of Consumer Sentiment. Public Opin. Q. 2000, 64, 413–428. [Google Scholar] [CrossRef] [Green Version]
  87. Keeter, S.; Miller, C.; Kohut, A.; Groves, R.; Presser, S. Consequences of Reducing Nonresponse in a Large National Telephone Survey. Public Opin. Q. 2000, 64, 125–148. [Google Scholar] [CrossRef] [Green Version]
  88. Young, N.S.; Ioannidis, J.P.A.; Al-Ubaydli, O. Why current publication practices may distort science. PLoS Med. 2008, 5, e201. [Google Scholar] [CrossRef] [Green Version]
  89. Groves, M.R.; Presser, S.; Dipko, S. The Role of Topic Interest in Survey Participation Decisions. Public Opin. Q. 2004, 86, 2–31. [Google Scholar] [CrossRef] [Green Version]
  90. Taraday, M. Lack of Publication Bias in Intelligence and Working Memory Research: Reanalysis of Ackerman, Beier, Boyle, 2005. Stud. Psychol. 2019, 61, 203–212. [Google Scholar] [CrossRef]
  91. Wallach, J.D.; Boyack, K.W.; Ioannidis, J.P.A. Reproducible Research Practices, Transparency, and Open Access Data in the Biomedical Literature, 2015–2017. PLOS Biol. 2018, 16, e2006930. [Google Scholar] [CrossRef]
  92. Wind, Y.; Saaty, T.L. Marketing applications of the analytic hierarchy process. Manag. Sci. 1980, 26, 641–658. [Google Scholar] [CrossRef]
  93. John, K.L.; Loewenstein, G.; Prelec, D. Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychol. Sci. 2012, 23, 524–532. [Google Scholar] [CrossRef] [Green Version]
94. Wickramasinghe, V.S.K.; Takano, S.E. Application of Combined SWOT and Analytic Hierarchy Process (AHP) for Tourism Revival Strategic Marketing Planning; Eastern Asia Society for Transportation Studies: Surabaya, Indonesia, 2009; Volume 7.
95. Abedi, G.; Abedini, E. Prioritizing of marketing mix elements effects on patients' tendency to the hospital using analytic hierarchy process. Int. J. Healthc. Manag. 2017, 10, 34–41.
96. Najmi, A.; Kanapathy, K.; Aziz, A.A. Prioritising factors influencing consumers' reversing intention of e-waste using analytic hierarchy process. Int. J. Electron. Cust. Relatsh. Manag. 2019, 12, 58–74.
97. Wu, Y.; Chen, S.C.; Lin, I.C. Elucidating the impact of critical determinants on purchase decision in virtual reality products by Analytic Hierarchy Process approach. Virtual Real. 2019, 23, 187–195.
98. Gupta, S.; Dawar, V.; Goyal, A. Enhancing the placement value of professionally qualified students in marketing: An application of the analytic hierarchy process. Acad. Mark. Stud. J. 2018, 22, 1–10.
99. Jing, Z.X.; Shi, J.H.; Luo, Z.Y.; Chen, D.P.; Chen, Z.Y. Comprehensive Evaluation of Electricity Market Based on Analytic Hierarchy Process and Evidential Reasoning Methods. In IOP Conference Series: Earth and Environmental Science; IOP Publishing: Bristol, UK, 2019; Volume 354, p. 012117.
100. Shaverdi, M.; Heshmati, M.R.; Eskandaripour, E.; Tabar, A.A.A. Developing sustainable SCM evaluation model using fuzzy AHP in publishing industry. Procedia Comput. Sci. 2013, 17, 340–349.
101. Rostamy, A.A.A.; Shaverdi, M.; Ramezani, I. Green supply chain management evaluation in publishing industry based on fuzzy AHP approach. J. Logist. Manag. 2013, 2, 9–14.
102. Diouf, M.; Kwak, C. Fuzzy AHP, DEA, and managerial analysis for supplier selection and development: From the perspective of open innovation. Sustainability 2018, 10, 3779.
103. Owusu-Agyeman, Y.; Larbi-Siaw, O.; Brenya, B.; Anyidoho, A. An embedded fuzzy analytic hierarchy process for evaluating lecturers' conceptions of teaching and learning. Stud. Educ. Eval. 2017, 55, 46–57.
104. Myeong, S.; Jung, Y.; Lee, E. A study on determinant factors in smart city development: An analytic hierarchy process analysis. Sustainability 2018, 10, 2606.
105. Mayo, F.L.; Taboada, E.B. Ranking factors affecting public transport mode choice of commuters in an urban city of a developing country using analytic hierarchy process: The case of Metro Cebu, Philippines. Transp. Res. Interdiscip. Perspect. 2020, 4, 100078.
106. Ma, D.; Zheng, X. 9/9–9/1 Scale Method of AHP. In Proceedings of the 2nd International Symposium on AHP, Pittsburgh, PA, USA, 11–14 August 1991; Volume 1, pp. 197–202.
107. Ishizaka, A.; Balkenborg, D.; Kaplan, T. Influence of aggregation and measurement scale on ranking a compromise alternative in AHP. J. Oper. Res. Soc. 2010, 62, 700–710.
108. Harker, P.; Vargas, L. The Theory of Ratio Scale Estimation: Saaty's Analytic Hierarchy Process. Manag. Sci. 1987, 33, 1383–1403.
109. Saaty, T.L.; Vargas, L.G. Models, Methods, Concepts & Applications of the Analytic Hierarchy Process; Springer Science+Business Media: New York, NY, USA, 2012; Volume 175.
110. Libby, R.; Blashfield, R.K. Performance of a composite as a function of the number of judges. Organ. Behav. Hum. Perform. 1978, 21, 121–129.
111. Goepel, K.D. Comparison of judgment scales of the analytical hierarchy process—A new approach. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 445–463.
112. Dong, Y.; Zhang, G.; Hong, W.C.; Xu, Y. Consensus models for AHP group decision making under row geometric mean prioritization method. Decis. Support Syst. 2010, 49, 281–289.
113. Saaty, T.L. Fundamentals of the analytic hierarchy process. In The Analytic Hierarchy Process in Natural Resource and Environmental Decision Making; Springer: Dordrecht, The Netherlands, 2001; pp. 15–35.
114. Benedetti, R.; Andreano, A.S.; Piersimoni, F. Sample selection when a multivariate set of size measures is available. Stat. Methods Appl. 2019, 28, 1–25.
115. Patino, C.M.; Ferreira, J.C. Inclusion and exclusion criteria in research studies: Definitions and why they matter. J. Bras. Pneumol. 2018, 44, 84.
116. Gray, J.R.; Grove, S.K.; Sutherland, S. Burns and Grove's The Practice of Nursing Research: Appraisal, Synthesis, and Generation of Evidence, 8th ed.; Elsevier: Amsterdam, The Netherlands, 2017.
117. Beullens, K.; Loosveldt, G.; Vandenplas, C.; Stoop, I. Response Rates in the European Social Survey: Increasing, Decreasing, or a Matter of Fieldwork Efforts? Survey Methods: Insights from the Field, 2018. Available online: https://surveyinsights.org/?p=9673 (accessed on 16 July 2020).
118. Langer, G. Probability versus non-probability methods. In The Palgrave Handbook of Survey Research; Vannette, D.L., Krosnick, J.A., Eds.; Springer: Cham, Switzerland, 2018; pp. 393–403.
119. Franco, A.; Malhotra, N.; Simonovits, G. Publication bias in the social sciences: Unlocking the file drawer. Science 2014, 345, 1502–1505.
120. Peck, L.R.; D'Attoma, I.; Camillo, F.; Guo, G. A New Strategy for Reducing Selection Bias in Nonexperimental Evaluations, and the Case of How Public Assistance Receipt Affects Charitable Giving. Policy Stud. J. 2012, 40, 601–625.
121. Showalter, A.D.; Mullet, B.L. Sniffing Out the Secret Poison: Selection Bias in Educational Research. Mid-West. Educ. Res. 2017, 29, 207–234.
122. Clark, G.T.; Mulligan, R. Fifteen common mistakes encountered in clinical research. J. Prosthodont. Res. 2011, 55, 1–6.
123. Churchill, G.A.; Iacobucci, D. Marketing Research: Methodological Foundations; Dryden Press: New York, NY, USA, 2006.
124. Greenwood, M. Approving or improving research ethics in management journals. J. Bus. Ethics 2016, 137, 507–520.
125. Plemmons, D.K.; Baranski, E.N.; Harp, K.; Lo, D.D.; Soderberg, C.K.; Errington, T.M.; Esterling, K.M. A randomized trial of a lab-embedded discourse intervention to improve research ethics. Proc. Natl. Acad. Sci. USA 2020, 117, 1389–1394.
Table 1. Research reliability indicators.

| Reliability Indicator | Inverse Scale | Logarithmic Scale | Power Scale |
|---|---|---|---|
| Lambda, λ | 8.432 | 8.256 | 8.331 |
| Consistency Ratio, CR | 0.019 | 0.013 | 0.017 |
| Consensus Index, CI, % | 68.2 | 83.5 | 74.7 |
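A note on Table 1 for readers unfamiliar with AHP reliability checks: λ is the principal eigenvalue of a pairwise comparison matrix (a perfectly consistent matrix of size n yields λ = n), and the consistency ratio CR = (λ − n) / ((n − 1) · RI_n) is conventionally deemed acceptable below 0.10; the consensus index, following the approach described in [111], instead measures agreement between the experts' priority vectors. The sketch below illustrates the standard λ/CR computation on a small, invented judgment matrix; it does not reproduce the study's data.

```python
import numpy as np

# Saaty's random consistency index RI, indexed by matrix size n
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(A):
    """Return (priority vector, lambda_max, consistency ratio) for a
    positive reciprocal pairwise comparison matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # index of the principal eigenvalue
    lam = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalise weights to sum to 1
    n = A.shape[0]
    cr = (lam - n) / ((n - 1) * RI[n])     # CR = CI / RI
    return w, lam, cr

# Illustrative 3x3 judgment matrix (invented, not the study's data)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam, cr = ahp_priorities(A)
print(w.round(3), round(lam, 3), round(cr, 3))  # CR < 0.10 is acceptable
```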
Table 2. Values of obtained eigenvectors of bias influencing factors.

| Factor | Inverse Scale | Logarithmic Scale | Power Scale |
|---|---|---|---|
| Failure to examine and critically assess the prior literature | 0.127 | 0.111 | 0.118 |
| Failure to specify the inclusion and exclusion criteria for researched subjects | 0.142 | 0.151 | 0.147 |
| Failure to determine and report the error of measurement methods | 0.071 | 0.076 | 0.080 |
| Failure to specify the exact statistical assumptions and failure to perform sample size analysis | 0.065 | 0.069 | 0.057 |
| Improper population specification | 0.097 | 0.108 | 0.106 |
| Sampling and sample frame errors | 0.170 | 0.159 | 0.162 |
| Selection errors | 0.102 | 0.109 | 0.113 |
| Non-responsiveness | 0.124 | 0.113 | 0.119 |
| Missing data, dropped subjects and use of an intention to treat analysis | 0.062 | 0.057 | 0.052 |
| Problems to point out the weaknesses of your own study | 0.040 | 0.047 | 0.046 |
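The three columns of Table 2 correspond to different numerical judgment scales applied to the same 1–9 verbal scores. The paper does not restate the scale formulas, but the forms commonly used in the scale-comparison literature (e.g., [106,111]) are the inverse-linear 9/(10 − x), logarithmic log₂(x + 1) and power x² transformations; under that assumption, the sketch below shows how a single verbal judgment maps to different matrix entries per scale.

```python
import math

# Common AHP judgment-scale transformations of a verbal score x in 1..9.
# These forms follow the scale-comparison literature (e.g., [106,111]);
# they are an assumption here, not quoted from the present paper.
def inverse_linear(x):   # Ma-Zheng 9/9-9/1 scale [106]
    return 9.0 / (10.0 - x)

def logarithmic(x):
    return math.log2(x + 1)

def power(x):
    return float(x) ** 2

for x in range(1, 10):
    print(f"{x}: inverse={inverse_linear(x):.3f}, "
          f"log={logarithmic(x):.3f}, power={power(x):.0f}")
```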
Table 3. Ranks of researched factors.

| Factors Causing Bias in Marketing-Related Publications | Rank (Inverse Scale) | Rank (Logarithmic Scale) | Rank (Power Scale) | Final Rank |
|---|---|---|---|---|
| Sampling and sample frame errors | 1 | 1 | 1 | 1 |
| Failure to specify the inclusion and exclusion criteria for researched subjects | 2 | 2 | 2 | 2 |
| Non-responsiveness | 4 | 3 | 3 | 3 |
| Failure to examine and critically assess the prior literature | 3 | 4 | 4 | 4 |
| Selection errors | 5 | 5 | 5 | 5 |
| Improper population specification | 6 | 6 | 6 | 6 |
| Failure to determine and report the error of measurement methods | 7 | 7 | 7 | 7 |
| Failure to specify the exact statistical assumptions and failure to perform sample size analysis | 8 | 8 | 8 | 8 |
| Missing data, dropped subjects and use of an intention to treat analysis | 9 | 9 | 9 | 9 |
| Problems to point out the weaknesses of your own study | 10 | 10 | 10 | 10 |
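Table 3 follows directly from Table 2: within each scale, factors are ranked by descending eigenvector weight. The paper does not spell out how the per-scale ranks are combined into the final column, but a simple mean-rank aggregation (an assumption) reproduces the final ordering shown, as the sketch below demonstrates using Table 2's own values.

```python
# Rank Table 2's eigenvector columns within each scale, then aggregate.
# The mean-rank aggregation rule is an assumption (the paper does not
# state its rule), but it reproduces the Final Rank column of Table 3.
factors = [
    "Failure to examine and critically assess the prior literature",
    "Failure to specify the inclusion and exclusion criteria",
    "Failure to determine and report the error of measurement methods",
    "Failure to specify statistical assumptions / sample size analysis",
    "Improper population specification",
    "Sampling and sample frame errors",
    "Selection errors",
    "Non-responsiveness",
    "Missing data, dropped subjects, intention-to-treat analysis",
    "Problems to point out the weaknesses of your own study",
]
weights = {  # eigenvector values from Table 2
    "inverse":     [0.127, 0.142, 0.071, 0.065, 0.097, 0.170, 0.102, 0.124, 0.062, 0.040],
    "logarithmic": [0.111, 0.151, 0.076, 0.069, 0.108, 0.159, 0.109, 0.113, 0.057, 0.047],
    "power":       [0.118, 0.147, 0.080, 0.057, 0.106, 0.162, 0.113, 0.119, 0.052, 0.046],
}

def rank_desc(values):
    """Rank 1 = largest value."""
    order = sorted(range(len(values)), key=values.__getitem__, reverse=True)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

per_scale = {s: rank_desc(v) for s, v in weights.items()}
mean_rank = [sum(r[i] for r in per_scale.values()) / len(per_scale)
             for i in range(len(factors))]
for final, i in enumerate(sorted(range(len(factors)), key=mean_rank.__getitem__), 1):
    print(final, factors[i])   # matches Table 3's final ranking
```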
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
