1. Introduction
In recent decades, social science fields such as sociology, political science, planning, and even architecture have seen a rise in the popularity of experiments. The capacity to foster experimentation is argued to be one of the key characteristics of both behavioral economics and innovation policy mixes [
1]. When experimentation is viewed as continuous growth, the process of iterative adaptation to new circumstances and experiences is believed to pragmatically entail a certain idea of progress and improvement [
2].
Perhaps one of the best-known defenders of experimentation in the policy sciences, Donald T. Campbell considered experiments, and randomization in particular, to be the main pathway for scientific research and even an ideal for better governance and a Utopian society [
3].
Policy experimentation can be defined as “a purposeful and coordinated activity geared to producing novel policy options that are injected into official policymaking and then replicated on a larger scale, or even formally incorporated into national law” [
4]. Experimentalist governance, based on deliberation and the generation of evidence, was developed in response to command-and-control regulations, which were argued not to work in a contemporary world marked by fast-paced change and difficulties in implementing fixed rules on the ground [
5]. In contrast, experimentalist governance is increasingly considered an important driver of desirable societal transformation and, in the particular field of education, a catalyst of innovation [
6].
More importantly, though, policy experimentation is an attempt to fill a gap in our knowledge of what works and what does not. As early as the 1960s, Harold Lasswell asserted that experiments were an effective way to improve policy-making practices, generate scientific knowledge, and build capacity to implement novel ways of doing policy. These purposes imply a certain level of learning and a subsequent translation into policy practices [
7]. In order for educational experimentation to work, education systems must adopt an attitude of constructive skepticism that acknowledges the risk inherent in any reform or experiment and allows this process to be governed transparently [
8,
9]. In this framework, HEIs represent what James Mahoney and Kathleen Thelen describe as “compromises or relatively durable though still contested settlements based on specific coalitional dynamics” [
10]. They are thus inscribed in the political and social structure, and are themselves not strictly independent of socio-political change.
Digitalization cannot be studied in isolation from its context. In the French public higher education area, competition between institutions has furthermore been accentuated by international rankings and the establishment of evaluation agencies (e.g., the French Agency for the Evaluation of Research and Higher Education [AERES], the High Council for the Evaluation of Research and Higher Education [Hcéres], and EQUIS) [
11]. Higher education institutions have been urged to stand out by demonstrating their ability to innovate in both research and teaching. In particular, digital strategies appear to be a decisive lever for competitiveness available to higher education institutions. Whether it is to adapt training to the diversity of student populations, to increase the visibility of research and teaching activities, or to provide effective management tools, digital technology is bringing about profound changes in university policies. Thus, in such a competitive higher education system, it seems that institutions have no choice but to innovate.
Because the dominant conception of experiments in the policy sciences is that they are mainly a research method, questions of governance and leadership have not garnered enough attention and have remained under-examined. Yet, such deep changes raise questions about the management process within a public French higher education institution, the current principles of which are still mostly based on the unity of time and place. When considering improvement, Gilbert developed the Behavior Engineering Model with the belief that the greatest barrier to worthy performance comes from a lack of information and support by management rather than an individual’s lack of desire to perform well.
Gilbert’s model focuses on two distinct factors of performance—the environment and the individual’s behaviors—which can be viewed from three perspectives—information, instrumentation, and motivation. Based on his understanding of technological improvement, Gilbert’s Behavior Engineering Model consists of three Leisurely Theorems that:
Distinguish between accomplishment and behavior to define worthy performance: worthy performance is a function of the value of a person’s accomplishments relative to the cost of the behavior required to achieve them;
Identify methods for determining the potential for improvement (PIP), which amounts to the ratio of exemplary performance to typical performance;
Describe six components of behavior that can be manipulated to change performance, among which are environmental components (data, resources, and incentives), as well as knowledge, capacity, and motives [
12].
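Gilbert’s second theorem can be illustrated with a minimal computational sketch. The function and the figures below are hypothetical, chosen only to show how the ratio behaves; they are not drawn from Gilbert’s data or from this study.

```python
def potential_for_improvement(exemplary: float, typical: float) -> float:
    """Gilbert's PIP: the ratio of exemplary to typical performance.

    A value close to 1.0 suggests little room for improvement;
    larger values indicate greater potential for improvement.
    """
    if typical <= 0:
        raise ValueError("typical performance must be positive")
    return exemplary / typical

# Hypothetical figures: the exemplary performer completes 30 units of
# worthy output in a period in which the typical performer completes 12.
print(potential_for_improvement(30, 12))  # 2.5
```

On this reading, a PIP of 2.5 would mean that typical performers could, in principle, improve their output two-and-a-half-fold before matching the exemplar, which is why Gilbert treats a large PIP as a management opportunity rather than an individual failing.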
Thus, the aim of this paper is to focus on motivations, participation incentives, and the ethical issues at hand during the institutional management of the technological and digital improvement process at the University of Paris 1 Panthéon-Sorbonne. This objective relies on the following observations.
First, studies show that emerging attention has been directed at the link between public governance aspects [
13] and user perspective [
14] since 2018. In the management, marketing, and IT fields, this focus on users through “customer-centric”, “human-centered”, “user-oriented” approaches has been expounded earlier by researchers (e.g., [
15,
16]) and practitioners alike [
17]. In these fields, user-centered design is often branded as a means to create highly usable and accessible products through an iterative design process in which users and their needs are involved in each phase of the design process. In user-centered design, designers use a mixture of investigative methods and tools (e.g., surveys and interviews) and generative ones (e.g., brainstorming) to develop an understanding of user needs. This allows a product to reach the standard of “external integrity” [
18], which refers to the match between the product and the intended user and involves managing knowledge.
Similarly, the operational success of an experiment implies that all stakeholders (school staff and parents, local and central authorities, communities, and of course the students themselves) play their parts at all the steps, including the evaluation, which should be performed with respect to criteria decided a priori [
8], whether it be “relieving [users] of frustration, of confusion, of a sense of helplessness [, making] them feel in control and empowered” [
15], or to human factors, user experience, or “usability” [
17].
The challenge of developing successful products, here effective policies and infrastructures that enhance the user’s learning experience, requires an interrelational approach across all the key disciplines, thus leading to a higher level of “collective creativity” [
19]. This aims to generate more creative solutions than those generated by less user-oriented approaches.
Second, only a few studies have been conducted on digitalization in universities. In 2021, a survey was conducted in Paris and its surrounding areas on the use of digital technology in universities and training centers. The objective of this exploratory study was to identify pre-existing digital uses in higher education institutions before the health crisis, then to analyze the solutions implemented during the lockdown and measure their impacts on students and teachers. A survey was also carried out on the sustainability of the changes once the crisis was over and the issues that were highlighted within the higher education community. A second phase of complementary qualitative interviews involved 40 teachers and students from different institutions and levels of training in order to gather their experience of distance learning. Overall, these studies showed that the health crisis forced institutions to accelerate their transition, with French higher education institutions having to rely on a massive and unprecedented deployment and use of digital technology [
20]. Six universities proposed actions related to digital or distance learning through international cooperation (Paris 1 Panthéon-Sorbonne, Sorbonne Nouvelle, and Sciences Po), geographic dispersion of sites (Paris-Saclay), professional platforms (Ensam), and educational digital third parties (Evry).
In the particular case of Paris 1, although increasing political attention has been given to digitalization in the last two years, data on Paris 1 members’ perception of digitalization remain scarce. Only one internal examination of digital use at Paris 1 has compared the first lockdown with the second, showing that the student experience was not similar to that of the teachers: an explosion of use by teachers can be observed at the beginning of March 2020, while students did not follow the same acceleration curve at all.
These studies also lacked data on particular student populations. The aforementioned study conducted in the Parisian region only included eight student experiences, among which there was only one student from Université Paris 1 Panthéon-Sorbonne. When an informal exchange forum was conducted at Paris 1, only two students came forward, reporting the same general sentiment that was reported in other institutions. On an institutional level, surveys conducted by the ORIVE of Paris 1 only included three questions related to digitalization prior to the health crisis: first, student perspectives on teachers’ available equipment (50% satisfied or very satisfied); second, the desire to have online or blended courses (around 40% gave a categorical “no”); and third, the format used to transcribe classes (more than 60% digital format) (ORIVE, 2018–2019 data, Survey on Study Conditions for Bachelor and Master Students). In 2021, a thematic survey was conducted with all the students enrolled in a Master 2 during the academic year 2020–2021, which was marked by the generalized use of distance learning. This excluded distance learning, off-site training abroad, mobilities, the online law school, administration institutes, and preparatory classes. A total of 27% of the students responded to the survey.
However, as the survey took place in March–April 2021, the questions only covered assessments that took place before April 2021. By this date, 89% of students reported having taken distance assessments during the academic year. Furthermore, the survey focused on a single population at a single point in time.
In a particular context where this institution was not accustomed to distance learning, all levels of the university were overwhelmed and had to make do with the means at hand. However, while the health crisis has made it possible to reexamine the teaching and pedagogy provided by teachers in France [
20], the digital transformation was not necessarily inscribed in strategic policies. Difficulties experienced in achieving a complete and sustainable transformation have not yet been resolved, undoubtedly due in large part to the lack of a global policy for digital transformation in certain institutions, with very different situations from one university to another.
The situation of urgency and uncertainty that characterized the COVID-19 pandemic was therefore not necessarily compatible with the implementation of real innovative projects [
21]. For the moment, the renewal of pedagogical practices often remains at the stage of reflection, in spite of a few experiments by teachers. The adaptation of courses most often resulted in a simple transposition of the courses given in class to digital media. For many students, the COVID-19 pandemic reinforced a precariousness with multiple origins (cultural, economic, family, social), increasing the vulnerability of some of them.
Therefore, looking at the future of higher education from an adaptive perspective requires a better understanding of where and how students learn, since some learning activities may take place off campus. This also fits with the Vantage Points model developed by MGTaylor Corporation, which acts as a “slice of reality” serving to diagnose possible influences on behavior and identify strategies for performance improvement. Through glyphs, a certain spatial arrangement, and connections through different components, the Vantage Points models assert that “you can never understand the philosophy of a system or enterprise until you are immersed in the tasks that comprise its daily functions. The task provides a mental elevation from which the whole essence of the system can be contemplated.”
Third, there is much to be learnt from the process in itself. It is no coincidence that such surveys and studies have seen a rise in number since the COVID-19 pandemic [
22]. Previous literature suggests that “institutional settings define the degree and form of experimentation that is deemed legitimate” [
23], implying that experiments are bound by institutional rules. In other terms, “experiments are infused with political ideas and […] often confirm existing ideas rather than challenge them” [
24]. An ‘informal’ policy experiment [
2] can be derived from this, going beyond formal evaluation-based learning and creating an informal cognitive and normative learning that can influence further institutional changes [
25]. Since learning occurs through trial and error, Popper emphasizes the importance of being able to identify the causes of success or failure of a change.
Our study builds on those observations and refers to the outputs emerging from two participative experiments conducted in the French University of Paris 1 Panthéon-Sorbonne in the framework of the OpenU project. The processes explored in this paper fall into the gradual reforms approach, i.e., the necessity of small changes in order to more clearly identify the effects of the intervention and bring stakeholders on board.
In this paper, we report the results of those two rounds of experiments investigating the extent to which the knowledge generated in the higher education community can succeed in being pertinent when aiming to orient policies in said field. In each of the two experiments, which were conducted in spring 2022, members of the academic community of Paris 1 Panthéon-Sorbonne did not update their beliefs on digitalization when presented with the opportunity to run counter to their predispositions.
2. Materials and Methods
All figures and data used in this study stem from two participative experiments conducted with members of the University of Paris 1 Panthéon-Sorbonne between January 2022 and September 2022.
2.1. Specification of Context, Population, and Field of Study
Both authors are themselves members of the University of Paris 1 Panthéon-Sorbonne, which facilitated their access to the field and allowed them to make full use of the necessary internal services and actors. The University of Paris 1 Panthéon-Sorbonne is also one of the academic partners of the OpenU project, within the framework of which this research lies. The choice of this university appeared to be particularly pertinent in the context of the stated policy experimentation process.
Paris 1 Panthéon-Sorbonne is neither the only French university in the project nor the largest one. However, it is relatively representative of public universities in the French higher education system. The university gathers around 45,000 students and 2500 staff members. It reflects the LMD (Licence–Master–Doctorat) system, which most European countries have adopted in an effort to promote coherence across borders, but it also reflects the antagonism between public universities and more selective institutions (“Grandes écoles”) [
26]. In the year 2021–2022, around 2950 students were enrolled in parallel in an external two-year preparatory course (cours préparatoires or prépas). Those students do not attend classes at the university but remain registered as a “back-up plan” should they not be admitted into more competitive institutions.
At the same time, Paris 1 Panthéon-Sorbonne is a social-sciences-only university revolving around three disciplines—Economics and Management, Arts and Humanities, and Law and Political Sciences—and as such is one of the largest universities of humanities and social sciences in France. This exclusive nature is particularly interesting here as this paper’s discussions include the acceptance, adoption, and use of technologies [
27], which lies at the heart of the expertise of the university.
Its campus, famously based in the capital city, is characterized by its scattered nature. The University of Paris 1 Panthéon-Sorbonne is located on 25 sites in Paris and the Île-de-France region, with more than 1500 students enrolled in non-LMD courses (capacités, university diplomas, DAEU, etc.). Its research departments are structured around three major disciplinary poles with 36 research teams, including 23 UMRs under joint supervision with the CNRS or IRD and 13 host teams, as well as 10 doctoral schools.
The university also lies at the heart of a network of international relations covering five continents. More than 670 foreign students were registered in the university in the academic year 2021–2022. This is facilitated by the University of Paris 1 Panthéon-Sorbonne’s choice since 2020 to continue applying the same registration fees to French and foreign students, whether they are of intra- or extra-European origin. Beyond mobility, more than 1100 students are enrolled in off-campus courses in nine countries abroad, while 800 students are either in joint degrees or in double degrees [
28].
This large yet exclusively humanities-oriented panel therefore provides an appropriate field in which to conduct qualitative and reflexive experimentation on the role of community members in policy making.
In the process, we address the following questions:
Research Question 1 (RQ1)—What are the current barriers to the digital turn, as seen by non-strategic members of the community?
Research Question 2 (RQ2)—How can an inclusive, user-oriented participative approach be implemented in the digitalization of the university, i.e., how can participation and adherence be ensured?
Research Question 3 (RQ3)—Which ethical issues are at play when building policies based on such approaches?
Specifically, we have six principal hypotheses about how the effectiveness of policy experimentation will vary. The assumptions guiding the paper are as follows:
H1. There is a lack of information leading to a lack of acceptance.
H2. This lack varies in accordance with social factors, including marginalization.
H3. A more participative process is requested by concerned parties.
H4. When existent, participative processes remain quite lethargic due to a lack of incentive.
H5. Participative processes only concern and include those who are favourable to the topic of digital education.
H6. Participative processes only concern and include those who are interested in the topic of digital education.
In order to bypass possible bias stemming from the authors’ close relationship to the field, the choice was made to use different sampling techniques when addressing the target groups.
Non-probability sampling techniques were useful in this exploratory and qualitative phase of the study, as the aim was not to test a hypothesis about a broad population, but to develop an initial understanding of a small population. After a limited voluntary response (discussed below), a more judgmental sampling was implemented to save time and to select a committed and diverse sample for conducting the focus group. Such diversity did not only concern the different bodies (students, administrative staff, and teachers) but also characteristics such as gender, nationality, associative engagement, and interest in and position vis-à-vis digital technologies. Encapsulating such diversity required an extensive knowledge of those involved in order to select the sample most useful to the purposes of the research and to gather a varied range of data on their experiences.
The second experimentation phase first built on this sampling, combining a voluntary response (in which students already involved were free to volunteer or not, while a call for contribution was also disseminated) with a snowball sampling in which students recruited other potential participants with similar characteristics. However, in order to extend the representativity of the study, the second part of this phase, which consisted of a survey, relied on a voluntary response. The survey was sent out to all 45,000 students; recruitment was therefore open to the whole population. All the same, it is still not possible to speak of a probabilistic sample, as some respondents were at least somewhat inherently more likely to volunteer than others, whether due to their availability, their interest, or their acceptance of using online technologies to respond. Such circumstances were taken into consideration and are further discussed in this paper.
2.2. Data Collection in Two Phases
2.2.1. First Experimentation Phase
The first experiment (hereafter Phase 1) lasted between January 2022 and April 2022.
The purpose of this phase was to examine the expectations of users of the EU universities in digital times and report on outputs stemming from the creation of an engaging and inclusive imaginative process for members of the community where ideas are received, deepened, and put into use. This process aimed at wholly involving diverse university members and thus strengthening their perception of themselves as key players in their universities. At the same time, it aimed to extend the network of the OpenU project within the university while engaging members in marginal discussions that would feed into the institutional level and current interrogations within policy spheres and EUAs. While working on expectations and imagination, mediation was used to lead the collective work toward the definition of changes in digital policies necessary to meet and satisfy the expectations and hopes.
To that end, a series of focus group working sessions were held between members of the Paris 1 community, as shown in
Figure 1. The primary aim of the focus group working sessions was to assess and identify the expectations of users of EU universities in digital times. Collective work was conducted to define the changes in digital policies necessary to meet and satisfy these expectations and hopes. A certified professional facilitator was recruited to chair the meetings, train the trainers, and ensure impartiality.
Such sessions heavily relied on the philosophy of focusing on the real needs of users. They also built on inclusion and the necessity of leaving room for plurality, as they intended to sensibly welcome differences (of means, perspectives, priorities, etc.) between the present/future of some and the present/future of others. The focus groups were based on an adapted Design Sprint process, working on the basis of a rapid approach that allowed participants to understand, analyze, decide, imagine, and test on the basis of user feedback within an imposed time constraint and without iterations. While the process did not limit itself to five consecutive days, in order to adapt to the community’s calendar, its priority remained to limit the risks and uncertainties linked to innovation. Beyond reaching an immediate result, it moreover stimulated creativity, improved credibility, engaged diverse views, and generated strong motivation and training among the concerned sample.
This framework proved to be well adapted to the experiment, as it clearly specified who does what, when, how, and with whom, without interfering in the collaborative multidisciplinary innovation method at the heart of the focus group.
Through a call for participation, 12 members were identified from the three bodies of the academic community within the university (students, teachers, administrative staff). This group was diverse, as it included members both familiar and unfamiliar with digital tools, teachers who are skeptical of them, foreign students, undergraduate students, PhD candidates, alumni, administrative staff, and members involved in Una Europa. Participants were to experience deep sharing in the thinking process, which is necessary to foster a feeling of belonging and trigger deep involvement. They were expected to share hopes, expectations, and dreams about the digital university and about the university’s capacity for change that would allow for better experiences as students, teachers, and administrative staff.
The steering group monitored and approved the quality and ethics of the expected project results against the progress indicators and the key question it set at the beginning of the project.
By the end of this project, it was expected that participants would have engaged in an imaginative process that they found relevant to the OpenU project, but also that they would feel their contributions had been received with openness and had contributed to the final outcome and that their results can feed into the downstream discussions and steps within OpenU.
Personal data was limited to what was necessary for the purposes of processing the data: name, status in the university, and discipline.
However, a privacy statement was drawn up concerning the personal data, which was all the more necessary considering the trust environment that was built. For this reason, the confidentiality of all participants’ information shared in this focus group was respected.
Figure 1.
Structure of the Experimentation Phase 1 (designed by © Kumquat).
Two overarching questions were designed by participants in the experiment and guided the experimentation: How are we to consider the digital university and what does it look like? What are its challenges and limits? What are the hopes, fears, and priorities regarding the use of digital technology in order to ensure an optimal implementation of education, research, and study?
Such a qualitative method made it possible to give control to the participants, within well-defined limits, while allowing them a certain liberty of discourse. The participants take a direct part in the production of knowledge, exercise immediate control over the work and conclusions of the researchers, and directly link the analyses to a social praxis that is quite different in nature from scientific experimentation as it is conventionally conceived. It is important to note that this is not contradictory to the research register, because it is still a matter of taking the initiative, ensuring the methodological conduct, and assuming responsibility for the conclusions.
The role of iteration here allows “a loop-like pattern of multiple rounds of revisiting the data as additional questions emerge, new connections are unearthed, and more complex formulations develop along with a deepening understanding of the material” [
29]. It is less a question of asking good questions than of asking good questions of oneself, so that they emerge naturally in the course of the interview, which is conceived as a true interlocution [
30]. Such a question is therefore to be linked to a deeply reflexive process, the key to sparking insight, developing meaning, and progressively leading to refined focus and understandings.
It is also noteworthy that these two questions themselves stem from a number of diverse questions that came out of a collective discussion, such as:
What does the ideal digital university look like?
How does digital use correspond or diverge from the missions of the university?
How does the digital university raise concerns?
How does digitalization of the university have an impact on your daily life and work?
What are your expectations with regard to the security and privacy of digital universities (and why)?
Where to start?
Does digitalization improve pedagogical content?
How do fragmentation and interoperability of digital services affect your work?
What did you learn from your experiences with the digital university during the pandemic? (positives and negatives, necessary/desirable/unwanted things)
Is digitalization necessary? Is it inevitable?
Can links be established between the digital university and society at large?
What institutional guarantees would you like for the digital university? (diploma, ECTS, transferable credits to your home university)
How to overcome the language barrier?
How can digital functionalities support mobility?
Which changes would be necessary for the coordination and delivery of this new pedagogy?
The objective was therefore to sufficiently engage the members of the experiment collectively so that they would provide answers to these questions.
2.2.2. Second Experimentation Phase
The second experiment (hereafter Phase 2) started in May 2022 and ended in September 2022. One of the main targets of this phase was to build and conduct a survey on digitalization in the student population of the university. The main principle guiding this phase was the inclusion of students from the beginning of the experimentation as main actors in EU HEIs, which implies a high level of inclusion for students in the HEI’s teams. Students were involved in the project from the beginning and were called upon to lead and conduct the experimentation, and at the same time the policy making, inside the project. Students were also naturally targeted by the questionnaire and were involved in the analysis of the study.
Students took up the question of digital pedagogies in order to design a survey that reflected their needs, fears, and expectations. Special attention was given to the diversity of participants. The steering committee was made up of six students from different fields (informatics, arts, economy, history), different levels (bachelor, master, doctorate), and different profiles (CPGE before university, foreign, L1 to L3 at university, reorientation during the course, etc.). The steering committee was involved in (1) the survey design, (2) the launch of the questionnaire, (3) joint meetings with JU students, (4) the interpretation of the results, and (5) the communication of the results.
The objectives of the survey, as decided by the students participating in the steering committee, were to inform the university of students’ needs and expectations (both in times of crisis and outside of these periods) in order to facilitate the current reflections. The survey was therefore a way for students to become actors of change, to actively participate in the evolution of their university, and potentially to be heard. In particular, the survey aimed to examine the strengths and opportunities of digital technology in the university, to gather feedback on past experiences with digital technology in the university, and to entertain possible prospects for improvement of the digital system.
It was decided that there would be no limit to the number of students. The exclusion of courses outside the Learning and Research Units (IAE, ISST, EDS-IED, CIPCEA, DEVE-PSC) or of Erasmus courses, exchange programs, delocalized courses, CPGE, etc., was perceived as a potential limit to the diversity of responses. The steering group felt that all students enrolled at Paris 1 would be concerned and that those involved in mobility and exchange programs could have interesting opinions, even if it was admitted that there was a strong chance they would not respond. The survey was therefore sent to all 47,318 students, with a first question on the profile of the respondent in order to identify particular sources and contexts.
The questionnaire consisted of thirty-two questions, including four open-ended questions, covering five different aspects:
Identification of the respondent.
Distance learning conditions.
Students’ experiences with digital technology at the university.
Preferences of the student for improvement of the digital system currently in place.
Students’ preferences with respect to communicative and informative measures in the digital university.
Respondents to the survey were informed of the context and purpose of the data collection. The information collected in this survey was processed by computer, and the data concerning students were used in a strictly anonymous way. The sole recipient of the collected data was the University Paris 1 Panthéon-Sorbonne, which used them to establish indicators.
The survey was launched online on the BLOOM platform using the LimeSurvey tool on 17 June 2022 (LimeSurvey Community Edition Version 5.2.14+220214, LimeSurvey GmbH, Hamburg, Germany). An email reminder was sent on 28 June 2022, and the survey was closed on 7 July 2022. A total of 304 students responded. Participation was optional and anonymous. In accordance with the French Data Protection Act of 6 January 1978 and the GDPR (RGPD), students had the right to access and rectify data concerning them.
In both phases, the aim of such narrow but well-identified experiments was not to replicate the current arrangement but rather to challenge it. Here, policy experimentation is designed as a means of reaching out to actors who are normally excluded from public governance processes. The valuing of innovation would therefore not be limited to that publicly expressed by the education system, but would be directly linked to social representations of innovation [
31].
Notably, both phases put forward a participatory approach designed to actively involve the target population, i.e., students. While participatory evaluation processes are sometimes deemed less “scientific” and “objective” than more traditional ones, they allow stakeholders to take ownership of the results of their actions and to make them evolve according to the conclusions reached. This is all the more necessary because evaluating means projecting a system of values (a frame of reference) and expressing a particular point of view on the action; it is therefore important to encourage the expression of a diversity of points of view on public action so that the social legitimacy of the evaluation is as broad as possible [
32].
Participatory evaluation was deemed to offer greater external validity to the evaluation exercise, because it is discussed by concerned stakeholders, encouraging the expression of a diversity of viewpoints. The evaluative judgment is thus constructed from a multiplicity of informed opinions. The participation of stakeholders in the evaluation exercise is then seen as a guarantee that societal concerns will be better considered in the objectives of future projects, which gives these projects greater external legitimacy [
33]. By organizing the exchange of points of view, participatory evaluation turns the evaluation process into an exercise in the co-construction of public action. Confronting one’s point of view with that of others, better understanding the motivations of the other stakeholders, and identifying points of convergence and areas of irreducible disagreement between actors enable progress in the collective construction of the decision-making problem. In a way, it is a matter of betting on collective intelligence and channeling the energy arising from differences towards the creation of something new.
This evaluation, by seeking to give voice to those traditionally excluded from public debate—particularly the most disadvantaged groups—aims to broaden and enrich public debate. There is here an emancipatory purpose expected of participatory evaluation [
32]. Involving citizens helps break down their feelings of apathy, isolation, and powerlessness.
One of the longer-term aims was that the results of the evaluation would be all the more likely to be used if the students had participated in the different stages of the evaluation process and had therefore better assimilated its analyses and results. In addition, the more they contributed to the evaluation process, the more likely they were to agree with its findings. It was therefore hoped that the recommendations would be easier to implement and that there would be fewer obstacles to the solutions adopted [
34].
Among the participatory evaluation methods, the choice was made to use accompanied self-evaluation. This was viewed as one of the most complete in that context because (1) all the participants in the implementation of the project are the actors of the evaluation, from the definition of the objectives to the conclusions, and (2) methodological and institutional support is provided by an external facilitator, either the ORIVE or the IAF, who brings both competence and the necessary distance from the project to learn and evolve. Accompanied self-evaluation allows the actors to retrace together the path they have taken and to develop a long-term vision of what they want to pursue: to identify the major stages of the project, to see how the objectives have been implemented, how they have evolved and why, and to recognize obstacles and resources of which they were not necessarily aware. This is an exercise that requires a special kind of perspective and questioning. Moreover, the evaluation can lead to questioning that is difficult for the group to take on if no one is there to regulate the process.
2.3. Data Analysis
Ultimately, by September 2022, the material used for the analysis presented in this paper consisted of the following milestones and outputs:
- (1)
Results of benchmark evaluations as collected at the start of the first phase from participants of focus groups and steering committee meetings.
- (2)
Material outputs stemming from working sessions of the first phase—here, written recommendations intended for strategic, institutional, and political levels, as well as a tool kit for facilitating interactive and inclusive pedagogies and decision processes.
- (3)
The final survey resulting from exchanges and meetings with the Student Steering Committee during the second phase.
- (4)
Data collected through the survey from 304 students of the University Paris 1 Panthéon-Sorbonne.
- (5)
Participatory rapid appraisals and observations emerging from working sessions and meetings of both phases, and collected using a collaborative and overarching log book.
Each of the two experiments used both qualitative and quantitative methods to analyze its data.
In Phase 1, data were collected during focus groups and through pre- and post-session surveys.
Through participant observation, text analysis was implemented in order to gather sentiment information. This technique allows the intentions and emotions of discourse to be understood and monitored, classifying statements as positive, negative, or neutral, and analyzed according to certain factors. In order to maximize realism, classifications were derived solely from participant observation rather than being purposely elicited. Special attention was given to discourse-generated model practices and to argumentative and/or legitimating elements.
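The kind of sentiment tallying described above can be sketched in a few lines of Python. This is a minimal illustration only: the keyword lexicons and observation notes are hypothetical, and the actual classification in the study was done by human observers, not by keyword matching.

```python
# Hypothetical sentiment lexicons for illustration; the real classification
# was performed through participant observation, not automatically.
POSITIVE = {"useful", "helpful", "engaging", "accessible"}
NEGATIVE = {"confusing", "isolating", "stressful", "unreliable"}

def classify(note: str) -> str:
    """Label an observation note as positive, negative, or neutral
    based on simple keyword matching."""
    words = set(note.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Hypothetical observation notes.
notes = [
    "the platform was useful and engaging",
    "remote exams felt stressful and isolating",
    "lectures moved online",
]
counts = {}
for note in notes:
    label = classify(note)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # → {'positive': 1, 'negative': 1, 'neutral': 1}
```

Such a tally makes the positive/negative/neutral distribution of a discussion easy to monitor over successive sessions.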
Furthermore, analyzing argumentative, dramatizing, and evaluative statements, causal (cause–effect) relations and their link to responsibilities, problem dimensions, value implications, moral and aesthetic judgments, consequences, possible courses of action, and other elements allowed building a case for action-generating schemes and frames, which were reflected in further outputs.
Given the focus on discourse analysis, it was crucial to use qualitative methods in order to yield findings that reflected the participants’ perspectives, experiences, and emotions on a topic that, although part of daily life, remains tied to the private sphere.
This was complemented with a regression analysis of the pre- and post-project data to assess how responses changed. This method was used to analyze the survey responses. The same questions were asked both in an initial questionnaire before the project started and in an evaluation questionnaire after it had taken place, concerning knowledge of digital university services, opportunities to access and assess digital services, satisfaction and participation, and perspectives on participative processes and channels. The results of the two questionnaires were then compared to demonstrate the evolution of opinions over the course of the project, with a comparison diagram showing the results of the initial and evaluation questionnaires side by side. These data were also cross-referenced with the outputs of the aforementioned text analysis in order to monitor changes that occurred throughout the project.
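The core of such a pre/post comparison can be sketched as follows, assuming hypothetical Likert-scale (1–5) responses to the same question asked before and after the project; the figures are invented for illustration and are not the study’s data.

```python
from statistics import mean

# Hypothetical Likert-scale (1-5) responses to one question, asked
# before and after the project (not actual survey data).
pre  = [2, 3, 2, 4, 3, 2]
post = [3, 4, 3, 4, 4, 3]

# The shift in the mean response captures the evolution of opinions
# between the initial and evaluation questionnaires.
shift = mean(post) - mean(pre)
print(f"mean pre={mean(pre):.2f}, post={mean(post):.2f}, shift={shift:+.2f}")
```

Plotting the two means side by side per question yields exactly the kind of comparison diagram described in the text.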
Similarly, Phase 2 required both qualitative participant observation and a more quantitative analysis. By joining and participating in student discussions, we simultaneously observed and documented interactions while noting valuable information on topics that subjects would be reluctant to discuss in interviews, because they are considered obvious until discordance arises. A thematic analysis was then conducted. This involved noting the shared data before identifying and reviewing five main themes: perception of digital tools, perception of digital practices, perception of digital use, perception of the institution, and perception of students. Each theme was examined to gain an understanding of participants’ perceptions and motivations.
For students who had already participated in Phase 1, inferential analysis served to examine whether there were significant differences between the two phases. The main effects of each phase were identified, and comparisons or contrasts were established to determine between which conditions a difference was observed.
Additionally, analysis of the survey was conducted by a student in line with the overall structure of the phase. Before the quantitative analysis, the gathered data were prepared: the dataset was checked for missing data and outliers. The responses to the 28 closed questions were entered into the computer, and the analysis started with a simple presentation of the data, a frequency tabulation (“flat sort”). This took the form of a table (all the data were included, which is appropriate for small numbers) or of a graph, to give a synthetic view of the data and its general trend.
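A “flat sort” of a single-choice question is simply a frequency count with percentages, which can be sketched as below; the answer options and responses are hypothetical examples, not survey data.

```python
from collections import Counter

# Hypothetical single-choice responses to one survey question.
responses = ["satisfied", "neutral", "satisfied", "unsatisfied",
             "satisfied", "neutral", "no answer"]

# Flat sort: count each answer option and express it as a share of
# all respondents to the question.
counts = Counter(responses)
total = sum(counts.values())
for option, n in counts.most_common():
    print(f"{option:12s} {n:3d}  {100 * n / total:5.1f}%")
```

The same counts feed either the tabular presentation (headcounts and percentages) or a bar chart giving the general trend.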
After this simple presentation of the data, the hypotheses for analyzing and understanding the responses were discussed together with the ORIVE and the research team, question by question or criterion by criterion. This involved comparing the data with each other (from questionnaires and interviews) or with the existing literature review. Some questions were cross-referenced to identify links. These cross-references and cross-sortings were presented in the form of a graph, with mention of a possible statistical link where the data were representative. A breakdown of the results by field of study was carried out in order to extract potential similarities between fields, in accordance with the categories predefined by Université Paris 1 Panthéon-Sorbonne. Five sheets grouping the students by discipline were produced: Art; Law–Political Science; Economics–Business–Mathematics; Geography–History–Art–Philosophy; and Institutes (IAES-IDUP-IEDES-IREST). This breakdown meant that students enrolled in a double degree program across two fields of study were counted in each field.
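Cross-sorting two questions and testing for a statistical link amounts to building a contingency table and computing a chi-square statistic, as in this sketch. The cell counts below are hypothetical, chosen only to show the computation; the study’s actual cross-sortings and significance checks are not reproduced here.

```python
# Hypothetical contingency table crossing field of study with a
# satisfaction question (not actual survey counts).
observed = {
    ("Law", "satisfied"): 30, ("Law", "unsatisfied"): 20,
    ("Economics", "satisfied"): 15, ("Economics", "unsatisfied"): 35,
}

rows = sorted({r for r, _ in observed})
cols = sorted({c for _, c in observed})
total = sum(observed.values())
row_tot = {r: sum(observed[r, c] for c in cols) for r in rows}
col_tot = {c: sum(observed[r, c] for r in rows) for c in cols}

# Chi-square: sum over cells of (observed - expected)^2 / expected,
# where the expected count assumes the two questions are independent.
chi2 = sum(
    (observed[r, c] - row_tot[r] * col_tot[c] / total) ** 2
    / (row_tot[r] * col_tot[c] / total)
    for r in rows for c in cols
)
print(f"chi-square = {chi2:.2f}")
```

A large statistic relative to the critical value for the table’s degrees of freedom (here 1) flags a possible link worth reporting alongside the graph.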
Overall, the results were expressed as percentages in the graphs, and as numbers and percentages in the tables, except for the sheet concerning art students, where they were expressed only as headcounts due to the small number of students (seven). As the number of respondents was small, the results must be interpreted with caution, and the percentages are indicative only. In addition, due to rounding, some totals may be slightly below or above 100%.
For the multiple-choice questions, the results were presented in a similar way to the single choice questions.
As for open-ended questions, a content analysis was carried out in which description and data analysis are presented together. An initial analysis of each question was performed, reviewing the consistency of responses, possible contradictions, and statements that directly or indirectly identify the respondent. This was followed by a cross-sectional analysis, question by question. The report classically presents a question-by-question or, more frequently, criterion-by-criterion analysis of all the interviews, illustrated by excerpts (with respondents anonymized). The aim is to identify homogeneity or, on the contrary, diversity of points of view.
The target population’s perception of each organizing dimension was measured. Variables whose factor loadings were concentrated on the same factor were grouped together. This also served to uncover the variables that allowed specific segments to be streamlined.
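Grouping variables by the factor on which their loading is highest can be sketched as follows. The variable names and loading values are hypothetical placeholders; the study’s actual loadings are not reported in this section.

```python
# Hypothetical factor loadings for four survey variables on two factors
# (values invented for illustration).
loadings = {
    "ease_of_access":   (0.81, 0.12),
    "tool_reliability": (0.74, 0.20),
    "peer_interaction": (0.15, 0.78),
    "teacher_contact":  (0.22, 0.69),
}

# Assign each variable to the factor where its absolute loading peaks.
groups = {}
for var, load in loadings.items():
    factor = max(range(len(load)), key=lambda i: abs(load[i]))
    groups.setdefault(factor, []).append(var)
print(groups)  # → {0: ['ease_of_access', 'tool_reliability'], 1: [...]}
```

Each resulting group corresponds to one organizing dimension of students’ perception, which is what makes specific segments easier to summarize.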
Since data analysis and interpretation can be influenced by the personality and culture of the evaluator, collecting data in a neutral and fair manner helps ensure validity and reliability.
In both phases, neutrality and impartiality were considered when constituting an evaluation team that would overcome biases in data collection and in the analysis of the results, for better objectivity when scaling up or replicating in another context. While in the first phase an impartial facilitator was recruited, in the second phase the student was assigned by the ORIVE service on account of their reliability. This guaranteed administrative and political independence.
In parallel, given the mixed-methods design, relying exclusively on a neutral and impartial team would risk a superficial analysis of cases or phenomena during the process. Both independent entities were therefore supervised by institutional components, namely the International Affairs Department and the ORIVE. The ORIVE furthermore declared the survey to the data protection officer, as it was implemented not through internal services but through the OpenU project and its platform. Most importantly, the information collection stage required prior knowledge and expertise in the field to better understand the responses and collect information. The interpretation therefore relied on regular and open peer discussions with analysts and was finalized by the authors.