1. Introduction
The research reported in this paper aims to develop a quantitative method for measuring learner-centred education (LCE) within studio crit interactions. The method is then applied in a natural experiment to demonstrate its utility for assessing LCE in studio-based learning. Grounded in Vygotsky's (1978) sociocultural theory of cognitive development, Bloom's revised taxonomy (Krathwohl 2002) and situated learning theory (Lave & Wenger 1991), LCE positions the learner as the main agent responsible for the learning process (Bremner 2021). The learning process is situated within the educational, physical and social context, which includes interaction with tutors, peers, and assistive tools or media (Brown, Collins & Duguid 1989; Yeoman & Wilson 2019). Loosely defined, LCE comprises the learner's engagement and active practice of the relevant behaviours or skills to be gained while handling an educational task, fostered by the tutor's formative feedback (Bremner 2021). Because of this loose definition, there is a lack of methods to measure LCE quantitatively, making comparative studies difficult (Bremner, Sakata & Cameron 2022). Hence, there are limited quantitative results about the cognitive impact of LCE and its implementation (Schweisfurth 2015). This gap in knowledge restricts assessing how effectively an educational context enhances LCE in constructivist learning environments such as design studios.
The design studio is the principal environment for learning how to design. Rooted in the apprenticeship model and constructivist learning theories, the studio provides a simulated environment of the professional design process (Schön 1985; Cuff 1992). The studio's intended learning objectives (ILOs) are the design problem-solving behaviours comprising the design process. Students gain problem-solving knowledge by coping with real-world challenges, such as designing a dwelling or a leisure centre, which require them to consider structural, social and functional issues throughout the process. During the course, which commonly lasts a semester or two, they interact with expert tutors in design sessions termed "critiques", or "crits" for short, to receive formative feedback critiquing the work done and advising further progress (Megahed 2018). Crit interactions are the studio's core educational setting for learning the process of design. Learning occurs through actively mimicking and adopting professional design behaviours generated by tutors and peers within this educational context. These behaviours include introducing design issues that concern the design problem or its parts, and re-using and transforming them until arriving at a satisfactory solution (Schön 1987). In accordance with the primacy of interaction in cognitive development (Vygotsky 1978), the tutor's generation of these behaviours during the interaction plays a significant role in supporting the learner's acquisition of the intended learning behaviours.
The studio commonly uses different media for creating or representing design artefacts, and for developing and communicating them to self and others (Schön & Wiggins 1992). Studies show the important role that representations play in stimulating tutor–student interaction (Goldschmidt & Smolkov 2006). It is assumed that using different media will change the interaction between student and tutor and, therefore, affect LCE. Research recommends alternating between educational media in design studios (Rodriguez, Hudson & Niblock 2018). This emphasises the need to enable comparative studies that assess the support different media provide to LCE in studio education.
Measuring LCE in studio crits is particularly important, as this educational approach is considered to be learner-centred (Danko & Duarte 2009). However, studio assessment methodologies mainly focus on the quality of the resultant artefact and lack the capability to assess the effect of crit interactions on the designer's behaviour (Webster 2008; De la Harpe et al. 2009; Sevgül & Güçlü Yavuzcan 2022). Several studies found frequent tutor dominance in crit interactions (Goldschmidt, Hochman & Dafni 2010; Milovanovic & Gero 2018; Sawyer 2019). This emphasises the need to assess whether student–tutor crit interactions indeed facilitate LCE, which becomes more important with the spread of studio-based learning to other disciplines, such as computer science (Polo, Silva & Crosby 2018) and engineering (Danko & Duarte 2009; Bone et al. 2021).
While these studies identify persistent challenges in supporting learner-centred behaviours, they fall short in offering methods to systematically measure LCE in crit interactions. This gap limits the ability to assess the effectiveness of studio-based learning in implementing LCE.
1.1. Aim and significance
The study reported in this paper aims to quantify LCE within studio crit interactions. Such quantification would enable comparative studies and evidence-based implementation in studio-based education.
We extend existing research on studio assessment methodologies and elaborate on how LCE can be integrated into studio-based education, as well as into other disciplines, by enabling assessment through comparative studies. We present a measurable approach grounded in observable design behaviour to quantify LCE. This metric, described in detail in Section 3, enables researchers and educators to assess the presence and effect of learner-centred education through two core dimensions: (i) student engagement and (ii) active practice. In doing so, the study responds to methodological gaps in existing LCE research, particularly in design-based education settings. By introducing an empirically grounded metric, this study offers a replicable approach for measuring learning during studio crits.
2. Theoretical background
The literature in this section serves as a theoretical basis to establish a quantifiable definition and quantitative metrics for LCE in dialogic interaction. We first present a brief review of the current knowledge on LCE and its components in Education Science. This is followed by a description of the studio’s ILOs, which correspond with LCE components, and the current assessment methodologies employed that restrict assessing LCE in design crits.
2.1. Learner-centred education
Dialogic interaction is considered a foundational constituent in the conversational construction of the learning process (Dozois 2001). In this conversational context, LCE involves learning and teaching behaviours that support, and result in, the learner's active part in a dialogic student–tutor interaction. During the interaction, the tutor's role is to provide the student with opportunities to participate in the discussion, reflect on what she has learned and be involved in making decisions (Bremner 2021). LCE has fundamental benefits for supporting students in handling changing demands, expressing their thoughts and, most significantly from a cognitive perspective, enhancing learning (Bremner 2021). This aligns with Bloom's revised taxonomy, which emphasises learning as an active competency (Krathwohl 2002). Reported benefits include engagement and responsiveness (Schweisfurth 2015), learner–tutor interaction (Cornelius-White 2007), professional development (Olofson & Garnett 2017) and knowledge and skill acquisition (Logeswaran et al. 2021).
Reports describe LCE as a fundamental approach for empowering learners to take responsibility for learning and for gaining active learning, ideation and problem-solving skills (GUNi 2022). Measured through students' scores, LCE was found to largely support higher achievements compared to teacher-centred education (Kahl & Venette 2010), demonstrating its significant role in learning. Studies of student–tutor interaction showed a positive relationship in supporting knowledge transfer, reflecting the advantages of LCE (Bremner et al. 2022). LCE has been implemented in engineering education, mainly through studio-based learning, to encourage independent learning and team progress (Danko & Duarte 2009). However, LCE implemented in blended-learning engineering courses that utilise in-person and online settings faces student engagement challenges (Behara, Sibanda & Magenuka 2024), necessitating comparative assessments between the two settings.
These studies call for fostering and enhancing LCE in dialogic student–tutor interaction. However, there is a lack of adequate methods to measure LCE. This likely stems from LCE's inconsistent definitions and varied definitional constituents, which restrict conducting quantitative assessments (Bremner 2021). Post-course surveys, taken in engineering education, convey learners' experience but cannot provide quantitative assessments or compare a course with a similar course (Danko & Duarte 2009).
2.1.1. Active practice and learner engagement
Learner engagement is an essential component of LCE. Educational psychologists Mayer & Wittrock (2004) define learning as a change in the problem-solver's cognitive behaviours. In the learning process, the learner recruits prior knowledge or skills and applies them to handle a new problem situation and achieve a solution. To do so, the learner needs to be engaged in the process.
Researchers define engagement as a construct of three factors (Fredricks, Blumenfeld & Paris 2004; Greene 2015; Wang et al. 2023; Zhou & Ye 2024): (1) emotional engagement, which includes a learner's attitudes and values; (2) behavioural engagement, which refers to the learner's involvement in the course (e.g., class attendance); and (3) cognitive engagement, which refers to the efforts made by the learner in comprehending a task through handling new information.
While emotional and behavioural engagement (factors 1 and 2) are essential to support motivation and participation in studio learning, this study focuses on cognitive engagement (factor 3). Prioritising factor 3 constitutes the first step in developing the LCE metric. The focus on cognitive engagement is consistent with Megahed (2018), who emphasises the importance of reflective and dialogic processes in achieving meaningful learning outcomes in the studio.
Engagement involves the active development of existing knowledge structures by integrating new information (Greene 2015). Greene (2015) distinguishes between deep and shallow processing within cognitive engagement. Deep processing handles thoughtful, complex problems and involves the integration of new information into existing knowledge, whereas shallow processing handles less complicated tasks such as memorisation. Zhou & Ye (2024) further focus on the dynamic, temporal aspect of cognitive engagement in problem-solving. Considering that LCE encourages the learner to take responsibility for achieving progress (Bremner 2021), the occurrence of new solutions introduced by the learner can reflect engagement in problem-solving. In contrast, an increase in the tutor's introduction of new solutions may reflect her dominance in the interaction and hinder effective learning, risking the benefits derived from LCE. It is worth noting that new solutions introduced by the tutor play a role in formative crits, as they may stimulate the learner to develop them; the latter is seen as a successful act of knowledge transfer gained from the tutor's support and practised by the learner. It follows that the tutor's role in LCE is to facilitate an active dialogue rather than dominate the interaction (Cornelius-White 2007; Kahl & Venette 2010). This aligns with a recent study that found a positive correlation between formative assessment and learner engagement in written tasks (Zheng & Xu 2023).
The literature cited above characterises LCE as involving the learner's engagement and active practice of intended learning behaviours in problem-solving, stimulated and supported by a tutor (Cornelius-White 2007; Kahl & Venette 2010; Bremner 2021; Zhou & Ye 2024). This grounds the LCE metric within existing theory and serves as the first step towards achieving the aim of this study.
2.2. The studio’s intended learning objectives
The main goal of the design studio is to equip students with professional design behaviours that will enable them to initiate and drive the design process independently. These behaviours are considered, therefore, the studio's intended learning objectives (ILOs). They include identifying relevant structural, social or functional issues, generating solutions and refining them throughout the process. For example, a student may notice that a corridor looks too narrow and begin to explore how its configuration affects circulation or use. In crit interactions, the student's generation of design issues and transitions between the cognitive states of these issues reflect the active practice of these ILOs. Within this practice, generating new design issues is another major ILO. This refers to the first time a certain design issue appears during a design session. This activity is associated with creativity and divergent thinking (Guilford 1961; Goel 2014; Gabora 2018), considered to be responsible for producing progress through design alternatives and for acting as stimuli for generating additional design issues (Goldschmidt 2016; Kupers & van Dijk 2020). As the studio prioritises individual progress, a learner's cognitive engagement in generating new design issues during crits is a basic indicator of an effective and creative learning process.
Accomplishing the studio's ILOs is challenging. Design problems are considered 'wicked' problems (Rittel & Webber 1973). They carry an undetermined end and, thus, are difficult to evaluate and progress from. This suggests that generating new issues can be challenging for students who lack professional design skills. By introducing issues during the crit interaction, the tutor's feedback serves as an effective strategy to stimulate active practice in generating design issues. This is reflected in a study of ideation in studio crits showing that the tutor contributed at least 30 per cent of the ideas generated during the interaction (Goldschmidt & Tatsa 2005).
2.3. The studio’s assessment problem
Despite having clear ILOs, the assessment methodologies used in the design studio during crit interactions focus on the quality of the learning outcomes while lacking measurements of the skills gained or the objectives accomplished (Webster 2008; De la Harpe et al. 2009; Sevgül & Güçlü Yavuzcan 2022). Other methods propose assessing the design skills gained using progressive or final written reports (Sevgül & Yavuzcan 2023; Ejichukwu, Smith & Ayoub 2024). Focused on such outcomes, these assessment methods cannot assess LCE during crits.
The lack of quantitative measurements prevents tutors from reframing the interaction and denies the student insight into her strengths and weaknesses. For example, crit interactions are often found to be tutor-dominated, reflected in the tutor's generation of a larger number of verbal utterances and design issues compared to the student (Goldschmidt et al. 2010; Milovanovic & Gero 2018; Sawyer 2019). These teaching behaviours were shown to create hidden hierarchies reported to hinder students' engagement (Dutton 1987; Boling, Gray & Smith 2020; Lodson & Ogbeba 2020). This assessment problem applies to all types of studio assessments, including group crits, written feedback, and final reviews and grades (Sevgül & Güçlü Yavuzcan 2022). These interaction-centred challenges suggest the need to find ways to encourage LCE in design crits. Furthermore, since these assessment methodologies are not capable of assessing the accomplishment of ILOs quantitatively, they are not suited for measuring LCE or for assessing the effectiveness of crit interactions in facilitating LCE.
3. Research objectives and questions
Given the current gaps in assessing LCE and its effectiveness for accomplishing the ILOs of design studios, this study aims to quantify LCE within studio crit interactions, to enable (1) assessing its effectiveness in accomplishing the studio’s ILOs and (2) its utility in conducting comparative studies. This research addresses the following questions:
1. How can we quantify LCE in design studio crits to enable quantitative empirical assessments?
2. How can we assess LCE’s role in design studio crits with regard to the studio’s ILOs?
3. How can we assess LCE within crit interactions when using different educational settings?
To answer these questions, the following section articulates the proposed LCE metric. Subsequently, we demonstrate this LCE metric by applying it in a comparative case study, indicating its feasibility in accomplishing the stated objectives.
4. Measuring LCE in studio crit interaction
To measure LCE within crit interactions (RQ1), we examine the suitability of existing methods in assessing LCE within crit interactions and with regard to the studio's ILOs. Based on the literature, the criteria for quantifying LCE include the learner's active practice and engagement during tutor–student interaction. In design crits, active practice accounts for the learner's generation of design issues and transitions between these issues. Engagement is reflected in the ratio of the learner's and tutor's first introduction of design issues.
To track and quantify the components that contribute to this LCE measure, we combined two techniques: the first occurrence (FO) of design issues (Gero & Kan 2016) to measure engagement, and the function–behaviour–structure (FBS) ontology (Gero 1990; Gero & Kannengiesser 2004) to measure active practice. Both are embedded within the studio's ILOs and derived from the verbalisations that occur during the interactions between students and tutors, hence their selection for quantifying LCE. Previous research offers methods to quantify design behaviours (Hay et al. 2017). For example, Linkography (Goldschmidt 2014) measures idea sequences occurring in the design process. However, these methods either lack alignment with specific ILOs or are not suitable for coding and quantifying verbal interactions with precision. In contrast, FOs and FBS offer theoretically grounded, design-process-relevant, empirically validated methods.
4.1. Quantifying cognitive engagement in design crits
Quantifying cognitive engagement in design crits refers to the student's capability to integrate new information into existing knowledge (Greene 2015). In this sense, the iterative generation of existing issues reflects active practice, whereas the focus on newly introduced issues serves as a proxy for engagement. Measuring the FO of a design issue in a design session is a syntactic method for measuring cognitive engagement and progress (Gero & Kan 2016). The first time a word appears in the design session (e.g., "courtyard"), it is considered to be an FO. Considering the tutor's role in crit interaction of stimulating the learner into further participation, engagement is accounted for by the ratio of the learner's to the tutor's FOs. We acknowledge the method's limitations in distinguishing between similar instances of the same concept (e.g., addressing different courtyards) or between concepts that are semantically close. These will be addressed in future analyses using semantic similarity to assess differences.
By analysing the relationship of the FOs contributed by the student to those contributed by the tutor, this metric provides new insight into the student's cognitive engagement, defining it as dependent on the learner's instructional context (Vygotsky 1978). A ratio above 1 indicates the student is more proactive than the tutor in introducing design issues, reflecting higher engagement. Conversely, a ratio below 1 suggests the tutor is leading the crit.
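The engagement component can be sketched computationally. The following is a minimal illustration, not the authors' implementation: it assumes a transcript represented as (speaker, text) pairs and counts a word as an FO the first time it appears in the session, crediting it to whoever uttered it first (after Gero & Kan 2016). A real analysis would restrict FOs to design issues and, as noted above, would need semantic handling of near-synonyms.

```python
from collections import Counter

def engagement_ratio(segments):
    """Learner-engagement component of the LCE metric.

    `segments` is an ordered list of (speaker, text) tuples, with
    speaker in {"student", "tutor"}.  A word is a first occurrence (FO)
    if it has not appeared earlier in the session.
    """
    seen = set()
    fos = Counter()
    for speaker, text in segments:
        for word in text.lower().split():
            word = word.strip(".,;:!?\"'()")
            if word and word not in seen:
                seen.add(word)
                fos[speaker] += 1
    # Ratio > 1: the student introduces more new issues than the tutor.
    return fos["student"] / fos["tutor"] if fos["tutor"] else float("inf")

# Hypothetical three-segment exchange, for illustration only:
transcript = [
    ("tutor", "Tell me about the courtyard."),
    ("student", "The courtyard opens toward a shaded terrace."),
    ("student", "I want the terrace to frame the entrance."),
]
print(round(engagement_ratio(transcript), 2))  # → 2.0
```

Here the student contributes ten FOs against the tutor's five, yielding a ratio of 2.0, i.e., a student-led exchange under this proxy.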
4.2. Quantifying active practice in design crits
Active practice is reflected in how the learner processes function, behaviour and structure aspects by transitioning between these issues using the FBS ontology (Gero 1990; Gero & Kannengiesser 2004). The FBS ontology was chosen as it is widely used when studying design processes (Hay, Cash & McKilligan 2020) (Figure 1). It is used here as a coding scheme that tracks the designer's behaviours through verbalised design issues related to an artefact or one of its parts, and the transitions between these issues.

Figure 1. FBS ontology (Gero 1990; Gero & Kannengiesser 2004).
The FBS ontology describes the cognitive behaviours that an individual performs when designing, referring to the basic practices of the design process, taught and learned in studio settings. It has been used to describe the interaction during design crits (Masclet & Boujut 2010; Nespoli, Hurst & Gero 2021). The ontology is described as follows:
Requirements (R) refers to the expectations given to the designer by the client. Function (F) refers to the artefact's intended purpose, its teleology. Structure (S) describes the artefact's parts and their relationships. Behaviour describes how a structure fulfils its use; it is either the behaviour expected by the designer (Be) or the behaviour derived from the structure (Bs). Description (D) refers to external representations.
Changes occurring during the design process are referred to as transitions. Transitions in the FBS ontology map onto design processes, serving as evidence for one’s proficiency in processing the issues raised to achieve progress. The FBS ontology includes eight transition categories containing a total of twelve transitions, as follows:
Formulation (transition 1) develops the design requirements into a function (R→F); a function can then be developed into a behaviour expected to fulfil this function (F→Be). Synthesis 2 (transition 2) refers to the transition from an expected behaviour to a structure (Be→S) that is expected to fulfil this behaviour. Analysis (transition 3) produces a behaviour from structure (S→Bs). Evaluation (transition 4) compares the behaviour derived from structure with the expected behaviour (Bs↔Be), examining whether the design meets its expectations. Documentation (transition 5) generates a description of the artefact's structure (S→D). Reformulation type 1 (transition 6) refers to changes made by the designer to the structure when it has not met expectations (S→S′). Reformulation type 2 (transition 7) refers to changes made by the designer to expected behaviours when the structure is evaluated to be unsatisfactory (S→Be). Reformulation type 3 (transition 8) refers to changes made by the designer to functions when the behaviour is evaluated to be unsatisfactory (S→F) (Gero & Kannengiesser 2014). In this study, we used a further synthesis transition, Synthesis 1, which refers to the transition from function to structure (F→S) (Kannengiesser & Gero 2019). We added another synthesis transition, Synthesis 3, which describes a transition from behaviour derived from structure to structure (Bs→S). This process indicates an iteration on structure to better fulfil the behaviour and is a subset of transition 6.
For example, in architectural design, a courtyard, coded as Structure (S), can be processed into an expected behaviour (Be) of providing shaded sitting spots, which is then realised as a revised structure (S). The transition from Structure to Be is coded as Reformulation type 2, while the subsequent transition from Be to S signifies Synthesis. By providing a detailed description of the behaviours and processes carried out by professional designers, the FBS ontology presents a direct link to the studio's ILOs and, therefore, is potentially suitable for application to the context of crits, enabling the quantification of active practice in LCE.
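The transition scheme above can be sketched as a simple lookup over consecutive FBS codes. This is an illustrative simplification, not the study's coding procedure: in protocol analysis a transition need not occur between strictly adjacent segments, and arbitration resolves ambiguous cases.

```python
# Mapping of consecutive FBS code pairs to the transition categories
# described in the text.  Codes: R, F, Be, Bs, S, D.
TRANSITIONS = {
    ("R", "F"): "Formulation",
    ("F", "Be"): "Formulation",
    ("F", "S"): "Synthesis 1",
    ("Be", "S"): "Synthesis 2",
    ("Bs", "S"): "Synthesis 3",
    ("S", "Bs"): "Analysis",
    ("Bs", "Be"): "Evaluation",
    ("Be", "Bs"): "Evaluation",
    ("S", "D"): "Documentation",
    ("S", "S"): "Reformulation type 1",
    ("S", "Be"): "Reformulation type 2",
    ("S", "F"): "Reformulation type 3",
}

def classify(codes):
    """Label each pair of consecutive FBS codes with its transition name."""
    return [TRANSITIONS.get(pair, "none") for pair in zip(codes, codes[1:])]

# The courtyard example from the text: S -> Be -> S.
print(classify(["S", "Be", "S"]))
# → ['Reformulation type 2', 'Synthesis 2']
```

The count of labelled transitions produced this way feeds the active-practice component of the metric.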
To measure LCE using the FBS ontology, active practice is evidenced by the student's capacity to handle design issues, reflected through generating and processing them. The ratio between the number of transitions made and the FBS issues generated provides a new measure representing the student's deep processing (Greene 2015). Active practice within the interaction is therefore measured by the ratio of the transitions to the FBS issues generated by the student. Focusing on processing rather than on the ability to introduce issues, this measure also accounts for tutor-generated issues that are repeated by the student as part of learning through mimicking (Schön 1987; Megahed 2018). Distinguishing between newly introduced and previously introduced issues also underscores the assumption that repeated issues represent an increased focus on a particular part of the artefact, reflecting progress (Casakin et al. 2024).
4.3. A quantified measure of LCE in design crits
Embedded in the studio’s ILOs, both LCE’s components can be measured within student–tutor crit interactions, allowing for assessing the effectiveness of crit interactions (RQ1). The relationship between engagement and active practice determines LCE within the interaction, allowing for flexibility in enhancing LCE through either of the two components. LCE measurement is expressed in Equation 1.
$$ \mathrm{LCE}=\mathrm{learner\ engagement}\times \mathrm{active\ practice}=\frac{\mathrm{FOs}\left(\mathrm{student}\right)}{\mathrm{FOs}\left(\mathrm{tutor}\right)}\times \frac{\mathrm{Transitions}\left(\mathrm{student}\right)}{\mathrm{FBS}\left(\mathrm{student}\right)} $$
A value higher than 1 indicates that LCE took place during the session.
This LCE formulation, therefore, captures not only student participation but also the accomplishment of ILOs. By multiplying the two components, which are treated as independent but pedagogically linked, the LCE metric requires that both aspects be present, avoiding overrepresentation of surface-level contributions.
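Equation 1 can be computed directly from the four session counts. The counts below are hypothetical, chosen only to illustrate how the two components combine.

```python
def lce_score(fos_student, fos_tutor, transitions_student, fbs_student):
    """LCE = learner engagement x active practice (Equation 1).

    engagement      = FOs(student) / FOs(tutor)
    active practice = Transitions(student) / FBS issues(student)
    A score above 1 indicates that LCE took place during the session.
    """
    engagement = fos_student / fos_tutor
    active_practice = transitions_student / fbs_student
    return engagement * active_practice

# Hypothetical counts for a single crit, for illustration only:
score = lce_score(fos_student=40, fos_tutor=25,
                  transitions_student=60, fbs_student=80)
print(round(score, 2))  # → 1.2, i.e., LCE took place
```

Note how the multiplication works: an engagement of 1.6 is discounted by an active-practice ratio of 0.75, so a student who introduces many issues but rarely processes them cannot score highly on engagement alone.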
5. Case study
To demonstrate how this LCE measure can be applied to reveal its role in design crits (RQ1 and RQ2) and its use in comparing and contrasting media settings (RQ3), we present a natural case study of single student–tutor crit interactions. We analysed existing data collected from a third-year studio course taught at the Faculty of Architecture and Town Planning, Technion. The data collection and FO analysis were part of a former study (Sopher & Gero 2021). This formed the basis of this natural case study, in which student and tutor performance were independent variables. The ILOs in this course correspond with the studio's objectives. The design brief required the design of a public building in the context of an existing electricity station, including a programme, plans, sections and a 3D digital model. Student–tutor crit interactions in both settings corresponded with the traditional studio model, taking the form of a reflective discussion and interaction with design representations prepared by the student (Schön 1987). The course had two weekly sessions over a 14-week timeline. Crits alternated between two design representation media settings: immersive VR representations of 3D models at life-size scale (iVR) and non-immersive representations, including scaled hand drawings, physical models and desktop CAD (NI). This enabled demonstrating the applicability of the LCE metric in a comparative study (RQ3).
5.1. Data description and tools
Data consist of the recordings and transcriptions of the verbalisations uttered during three pairs of consecutive crits (representing early, mid- and final-semester crits) of a third-year student and the same tutor.
Figure 2 shows an example of the student–tutor interaction using the two media. Figure 3 illustrates the crit sessions used in this case study throughout the course timeline. Crits using NI media took place in the faculty studio rooms. The iVR comprises a concave screen that exceeds the human field of view and synchronised sensors that allow a single user to navigate a 3D artefact at real scale and experience a sense of presence in the digital display (Slater et al. 2022). Other participants share the presence experience through 3D glasses, allowing both tutor and student to have a shared view during the crit. Alternating between iVR and traditional settings in studio-based education is recommended (Rodriguez et al. 2018), hence the choice of this setting for the experiment. Artefacts are prepared ahead of time using digital modelling software. The crits in this study took place after student training (see Figure 3), reducing the possibility of bias due to a lack of experience in using the medium. However, variations in technological skills or individual comfort with 3D navigation may influence the student's behaviour, as acknowledged in the limitations of this study. The student was observed to use both media as outlined in Figure 3, which served as the basis for choosing her as the subject of this study.

Figure 2. Student–tutor crit interaction using the iVR (left) and NI media (right).

Figure 3. The course outline of 24 crits (2 crits per week) and the crits included in the case experiment.
5.2. Coding and analysis
Two coders and an arbitrator coded and analysed the transcripts using the FBS ontology by dividing the verbalisations into segments, where each segment contains exactly one FBS code. The agreement between the coders and the final arbitrated version, measured by Cohen’s kappa, is 0.64, within the range of substantial agreement. Table 1 exemplifies the coding and analysis process. Since the data were collected in Hebrew, the example has been translated into English.
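For readers wishing to reproduce the agreement statistic, Cohen’s kappa can be computed from two parallel lists of segment codes. The following sketch is illustrative only; the coder lists shown are hypothetical and are not data from this study:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' parallel code lists.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the chance agreement expected from the
    coders' marginal code distributions.
    """
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical segment codes from two coders (F, S, Bs, Be, D):
coder1 = ["F", "S", "S", "Bs", "Be", "D", "S", "F"]
coder2 = ["F", "S", "Bs", "Bs", "Be", "D", "S", "S"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.67
```

Disagreements between the coders are then resolved by the arbitrator to produce the final coded protocol.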
Table 1. Example from a protocol analysis using the FBS ontology. T=Tutor, S=Student

Once the final arbitrated codes were obtained, we calculated the distributions of design issues and transitions generated by the student and the tutor for the iVR and the NI crits. FOs are identified syntactically within the text (Gero & Kan Reference Gero and Kan2016). We then calculated the LCE in each crit. The results were normalised as rates per minute to remove differences in the durations of the crits.
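The normalisation step can be sketched as follows; a minimal illustration with hypothetical counts and durations, not data from this study:

```python
def per_minute(counts, duration_min):
    """Normalise raw counts of coded design issues (or transitions)
    to rates per minute, removing differences in crit duration."""
    return {code: n / duration_min for code, n in counts.items()}

# Hypothetical issue counts for one 25-minute crit:
issues = {"F": 10, "S": 40, "Bs": 25, "Be": 15, "D": 5}
rates = per_minute(issues, 25)
print(rates["S"])  # 1.6 S issues per minute
```

Rates obtained from crits of different lengths can then be compared directly.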
6. Indicative results
This section presents the LCE calculated for each crit to demonstrate the results obtainable. LCE can be enhanced through either of its components, offering a flexible approach to supporting learner progress. To demonstrate the reasoning behind the metric, we provide a detailed description of the LCE components, engagement and active practice, while using the iVR and NI media. Since this is a case study used to demonstrate the applicability of the LCE measure, we do not conduct statistical tests or draw generalised conclusions from these results. However, the indicative results show that this measure of LCE can discriminate between different sessions and between different media in those sessions.
The analysis resulted in 3,929 FBS-coded segments. The FOs generated by the student and the tutor were coded in a previous study (Sopher & Gero Reference Sopher, Gero, Stojakovic and Tepavcevic2021).
6.1. LCE
Table 2 presents the LCE for iVR and NI crits as an outcome of engagement and active practice.
Table 2. LCE, calculated for two different educational settings for three crits (1, 2 and 3) for the iVR and NI media

The results in Table 2 show differences in LCE generated within the two media types. All iVR crits had a higher LCE compared to the NI crits, indicating that in this case study, the iVR medium supported LCE better. Such results, if generalised, can support the integration of iVRs to enhance LCE in design studio crits, which tend to suffer from tutors’ dominance (Goldschmidt et al. Reference Goldschmidt, Hochman and Dafni2010; Milovanovic & Gero Reference Milovanovic, Gero, Marjanović, Clarkson, Lindemann, McAloone and Weber2018; Sawyer Reference Sawyer2019). The early semester iVR-1 crit had the highest LCE value, comprising a 1.07 student–tutor FO ratio, indicating higher student engagement, and a 0.5 active practice value of transitions developed from design issues. Comparatively, the early-semester NI-1 crit had the lowest LCE, with the lowest student engagement. The mid-semester iVR-2 interaction had higher student engagement but reduced practice in processing the issues generated. Compared to crit iVR-2, crit NI-2 showed a higher value in active practice but a lower engagement, resulting in a similar LCE to crit iVR-2. Considering the contribution of LCE to the learning process (Bremner Reference Bremner2021), these results suggest a flexible opportunity to enhance LCE within crit interactions regardless of any specific ILO.
Surprisingly, the results in this single case study show a decrease in LCE towards the end of the semester, mainly due to a decline in engagement. This may be due to the proximity of the final semester review, leading the student to be more receptive to the tutor’s suggestions regarding final improvements to their design. The increase in active practice may also support this inference.
6.2. Student engagement values
This section presents the FOs generated by the student and the tutor that determine the engagement values. A previous study retrieved a total of 1,227 FOs generated by the student and the tutor during the six crits (Sopher & Gero Reference Sopher, Gero, Stojakovic and Tepavcevic2021). Of these, 59% of the FOs were generated by the student when using the iVR, compared to 41% in the NI media. Table 3 presents the FOs generated by the student and the tutor for each crit, resulting in the student’s engagement values shown in Table 2.
Table 3. Distribution of FOs per minute, generated by the tutor and the student for each crit

The results presented in Table 3 exhibit a higher rate of FOs generated by the student when using the iVR compared to the NI crits, implying the medium’s capacity to support student engagement. The early crit iVR-1 had more student FOs than tutor FOs, demonstrating support for student engagement during this crit (Table 2). The parallel NI-1 crit had the lowest rate of FOs introduced by the student while exhibiting an increase in the tutor’s FOs, suggesting a tutor-centric interaction and leading to low engagement. Mid-semester crit iVR-2 had the highest student generation of FOs, with a decrease in the tutor’s FOs, implying better support for student engagement. Both final semester crits, iVR-3 and NI-3, exhibit similar rates of FOs for each participant, suggesting lower engagement, regardless of the educational setting involved, for this student–tutor pair.
6.3. Student active practice values
This section presents the student’s active practice values for each medium, determined by the generation of FBS issues and transitions between issues. This provides a fine-grained analysis of the studio’s ILOs practised by the student in the interactions that comprise LCE within each medium. While results from such analyses are not new, discussing the values obtained in relation to active practice adds a new understanding of the reasoning behind LCE.
Figure 4 shows the percentage of FBS issues per minute (top) and the percentage of transitions per minute (bottom) generated by the student in the early, mid- and final semester crits.

Figure 4. Student’s values of FBS issues (%) (top) and transitions (%) (bottom) in the iVR and NI crits: a, d. Crits iVR-1 and NI-1; b, e. Crits iVR-2 and NI-2; c, f. Crits iVR-3 and NI-3.
Differences in active practice were found between the two media. The student’s practice of FBS issues increased in crits iVR-1 and iVR-3, compared to her activity in the parallel NI crits. During the iVR crits, the student generated more Structure (S) and Behaviour from structure (Bs) issues than in the parallel NI crits, implying the medium’s support for this practice. All iVR crits showed a higher rate of practising Synthesis types 2 and 3 (Be→S and Bs→S), Analysis (S→Bs) and Reformulation-1 (S→S) transitions. Interestingly, the iVR better supported Evaluation (Bs→Be), a transition focused on comparing the existing behaviour to expected behaviours, while the NI setting better supported the opposite transition (Be→Bs). This suggests an advantage for blended learning environments in accomplishing the studio’s ILOs through active practice in different settings. The issues generated and processed characterise the student’s active practice within each medium.
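The transition types named above can be counted directly from a sequence of FBS codes. A minimal sketch follows; the mapping is a partial one drawn from the transition names in the text, and the coded sequence is illustrative rather than taken from the study’s data:

```python
from collections import Counter

# Partial mapping of consecutive code pairs to the transition
# names used in the text (assumed, for illustration only):
TRANSITIONS = {
    ("Be", "S"): "Synthesis-2",
    ("Bs", "S"): "Synthesis-3",
    ("S", "Bs"): "Analysis",
    ("S", "S"): "Reformulation-1",
    ("Bs", "Be"): "Evaluation",
}

def count_transitions(codes):
    """Count named transitions between consecutive FBS codes."""
    counts = Counter()
    for a, b in zip(codes, codes[1:]):
        name = TRANSITIONS.get((a, b))
        if name:
            counts[name] += 1
    return counts

# Hypothetical coded sequence from one crit:
print(count_transitions(["Be", "S", "Bs", "Be", "S", "S"]))
```

Dividing the resulting counts by the crit duration yields the transition rates reported per minute.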
The early semester crits showed interesting differences (Figure 4a,d). In contrast to crit NI-1, crit iVR-1 showed dominance in practising all FBS issues, in particular Function (F) issues, responsible for introducing design issues related to the intended use of the artefact. These indicative results align with a former study analysing two mid-semester crits (Sopher, Casakin, & Gero Reference Sopher, Casakin, Gero, Pak, Wurzer and Stouffs2022) and expand them to demonstrate practice throughout the semester. Compared to crit NI-1, this iVR crit had more transitions between issues, suggesting the medium’s support for active practice in the early learning stage.
The pair of mid-semester crits shows a decrease in the student’s active practice during the iVR-2 crit compared to the parallel NI crit, in particular for F, Be and D issues and the transitions between these issues. Crit iVR-2 had a higher practice of Structure (S) and Behaviour from structure (Bs) issues, and the transitions between them, compared to the parallel crit NI-2 (Figure 4b,e). The high engagement score, along with the decrease in active practice in the iVR-2 crit (Table 2), can characterise the student’s LCE as being more focused on introducing issues than developing them. Such a result may imply fixation or call for enhanced instructional support. The final crits reflected more similarity in the student’s practice across the two media, with slightly higher values for S, Be and F issues in the iVR-3 crit and a higher generation of D issues in the parallel NI-3 crit (Figure 4c,f). The higher rate of transitions led to a higher active practice score in the iVR-3 crit compared to its parallel NI-3 crit.
These results highlight how this LCE metric can be used across sessions and between media formats, providing a foundation for deeper pedagogical interpretation of specified ILOs, addressed in the following section.
7. Discussion
In response to the reported gaps in measuring LCE quantitatively and assessing its implementation in studio crits, this study proposed a novel method to measure LCE in design learning situations and tested its capability in a natural case study of comparative media in a design studio. Focused on describing and demonstrating the LCE method, this research did not provide hypotheses to be tested. Therefore, the method developed and the results exemplified in the case study are discussed as follows: First, we interpret the results to identify the conclusions that can be drawn when applying the method in research to extend current knowledge. Second, we discuss the insights that can be drawn from applying this assessment method to studio-based education. Finally, we outline the method’s strengths and challenges, draw possibilities for future development and state the limitations of this study.
7.1. Interpretation of indicative results
The indicative results demonstrate the use of the method in a comparative study assessing the effectiveness of LCE for accomplishing the ILOs of design studios within crit interactions in two different educational settings. By applying the proposed LCE metric, it was possible to measure learner engagement and active practice, considered the building blocks of LCE (Bremner Reference Bremner2021). Crit interactions using the iVR had higher LCE values than traditional NI crits, a pattern found throughout the semester stages for the case tested. The early and final semester crits had particularly large differences, with low LCE values in the NI crits, indicative of a failure to implement LCE effectively within the interaction with the student. The higher cognitive engagement within iVR crits aligns with former studies finding that iVRs support creative thinking and learners’ motivation (Chang, Kao, & Wang Reference Chang, Kao and Wang2022; Obeid & Demirkan Reference Obeid and Demirkan2023). The cognitive engagement value proposed in the current research extends these studies by offering a fine-grained analysis, enabling temporal and within-subject results. The active practice results show a higher ratio of processing of the design issues generated in the early and final semester iVR crits. These indicative results align with former studies analysing tutor–student crit interactions (Goldschmidt et al. Reference Goldschmidt, Hochman and Dafni2010; Milovanovic & Gero Reference Milovanovic, Gero, Marjanović, Clarkson, Lindemann, McAloone and Weber2018; Sawyer Reference Sawyer2019). Together with the student’s high engagement values in those crits, such findings, if generalised, can indicate an effective implementation of LCE in support of the studio’s ILOs.
It is worth noting that both low and high LCE are required in the learning process (Sopher et al. Reference Sopher, Casakin, Gero, Pak, Wurzer and Stouffs2022). The results from this temporal case study exemplify the method’s potential for gaining new knowledge on implementing LCE over time. Results from LCE assessment can potentially serve as evidence for developing blended learning studio syllabi (Rodriguez et al. Reference Rodriguez, Hudson and Niblock2018; Behara et al. Reference Behara, Sibanda and Magenuka2024). Such insights do not contradict good tutor practice; rather, they can highlight the need for integrating specific educational technology to enhance LCE-based interaction.
7.2. Insights for studio-based education
The demonstration of LCE assessment in design studio crits provides useful insights for educators implementing the studio-based education model. The results from this case study delineate the learner’s accomplishment of the studio’s ILOs, criticised as rarely being assessed (De la Harpe et al. Reference De La Harpe, Peterson, Frankham, Zehner, Neale, Musgrave and McDermott2009; Sevgül & Güçlü Yavuzcan Reference Sevgül and Güçlü Yavuzcan2022; Webster Reference Webster2008). Such information can support educators in providing learners with feedback adapted to their progress, thereby advancing customised teaching. Tutors receiving information on how they framed crit interactions may alter their teaching behaviours to better facilitate LCE.
Most importantly, although studio-based education is commonly assumed to inherently support LCE (Megahed Reference Megahed2018), this assumption lacks empirical validation. Therefore, developing evidence-based metrics to quantitatively assess LCE within studio interactions is essential for verifying its actual presence.
While the demonstration study does not address institutional policies, it contributes to evidence-based pedagogical practices that can inform curriculum development at the studio or departmental level, for example, by coordinating instructional strategies with respect to LCE across different years of study.
Since the results provided are limited, it is worth addressing additional possible outcomes. For instance, the numerator of LCE can approach or exceed 1, a situation which becomes possible with engagement ratios greater than 1. While such ratios are desired, as the studio encourages LCE during crits, such a result can imply weak instructional support. In opposite cases, LCE values lower than 0.1 can occur when both components are equal to or smaller than 0.3 (e.g., an engagement value of 0.2 and an active practice value of 0.3 will result in an LCE value of 0.06), indicative of a failure in the implementation of LCE or in the student’s capacities in both engagement and active practice. Such results can direct educators towards taking different pedagogical approaches to support the student in accomplishing the studio’s ILOs.
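For illustration, the worked example above is consistent with reading LCE as the product of its two components. Under that assumption (a sketch, not the formal definition of the metric), the boundary cases discussed can be checked numerically:

```python
def lce(engagement, active_practice):
    """Sketch of LCE as the product of its two components,
    consistent with the worked example in the text
    (engagement 0.2 x active practice 0.3 -> LCE 0.06).
    Engagement here is the student/tutor FO ratio (can exceed 1);
    active practice is the share of issues developed into transitions."""
    return engagement * active_practice

print(round(lce(0.2, 0.3), 2))   # 0.06, the low value flagged in the text
print(round(lce(1.07, 0.5), 3))  # 0.535, cf. the iVR-1 crit values
```

Values below 0.1 would thus flag interactions where both components are weak, whereas values approaching 1 require an engagement ratio well above parity.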
7.3. Challenges and limitations
The LCE method faces a challenge derived from the labour required for coding in protocol analysis, which limits immediate implementation by educators. However, advances in artificial intelligence, in particular large language models, hold promise for providing automated coding solutions (Siddharth, Blessing, & Luo Reference Siddharth, Blessing and Luo2022). Nonetheless, the findings in this study provide preliminary evidence that this method for measuring LCE can provide the basis for discriminating between different studio sessions.
As a case study monitoring a single student and a single tutor, it is not possible to draw generalisable conclusions. Since the crits took place in a continuous learning process, the chain effect of one crit on subsequent crits is unknown. The studio is a rich educational setting: differences between sessions can have a multitude of causes and may not solely reflect differences in LCE. The variables tested may be influenced by former sessions or by variations in technological skills or individual comfort with using the iVR. Future studies may consider a longer training period.
While the studio covers many more practices which require quantitative assessment, such as problem framing or collaborative behaviour (De la Harpe et al. Reference De La Harpe, Peterson, Frankham, Zehner, Neale, Musgrave and McDermott2009), they exceed the focus of this study in quantifying LCE within crit interactions.
8. Concluding remarks
Following situated learning theories, we characterised LCE through student–tutor interaction to enable assessing whether it is supported in design crits. LCE metrics account for the student’s active practice and cognitive engagement in introducing, generating and processing design issues, considered the main ILOs in design studios. To demonstrate applicability in comparative empirical studies, the method was applied in a natural case study of an architecture studio course that used two different media settings during the crits. Three pairs of crits using the iVR and NI media that took place during early, mid- and final semester phases were analysed. The study demonstrates the utility of using the LCE method to examine how different educational settings may influence learner-centred behaviour in studio crits. iVRs were chosen as a case study for their unique advantage in design education in providing students with a life-sized view of the design shared with their tutors. In this case study, an examination of the effects of two media on LCE components showed that these media contributed differentially to the values of each component. With further validation, the method could support assessment of LCE across a broader range of studio-based environments.
LCE is measured through the relation between a learner’s active practice and engagement. This means that, in a given setting, although a learner can be highly engaged (with the tutor’s support), he or she can experience difficulties in generating design issues or transitions. Defining LCE as the outcome of this relationship enables the tutor to design the educational setting to support LCE. Furthermore, this relationship allows an LCE value comprising low engagement and high active practice to equal the LCE of the opposite combination (i.e., high engagement and poor practice). This raises the question of whether the two components should carry equal weight in the established relationship. In this sense, how the evaluation of LCE values accounts for temporal differences commonly found in education, such as crit phases or the different benchmarks for accomplishing learning objectives, remains another open question.
8.1. Suggestions for future research
Future research may focus on refining the method developed in this study to account for additional ILOs in studio-based learning (e.g., problem framing, project management) and further constructs of engagement, such as emotional and behavioural engagement (Fredricks et al. Reference Fredricks, Blumenfeld and Paris2004; Zhou & Ye Reference Zhou and Ye2024), or to enable automated protocol analyses. The latter may support the development of real-time feedback that tutors can use to custom-tailor their teaching to the student, allowing them to better frame the interaction to increase engagement and active practice. Exploring the application of advanced analytical techniques, such as natural language processing algorithms (Casakin et al. Reference Casakin, Sopher, Gero and Anidjar2024), to analyse large datasets of student–tutor interactions has the potential to enhance the scalability and efficiency of assessing LCE, enabling longitudinal studies. While this study did not collect direct qualitative input from the tutor, studies assessing tutors’ reactions to the LCE metric can provide additional directions on how the method should be developed. Finally, comparative studies examining the effectiveness of different educational environments in fostering LCE within design studios could inform the evidence-based integration of educational settings.
Acknowledgements
We thank Associate Professor Fisher-Gewirtzman, Faculty of Architecture and Town Planning, Technion, and the student for their agreement to participate in this study. The study was generously supported by Ariel University, grant no. RA2400000285. Part of J. S. G.’s time was supported by the U.S. National Science Foundation grant number EEC-1929896.
Competing interests
The authors declare none.
Ethical standards
This study has IRB approval from the ethics committee for non-interventional research at the University of Nantes, IRB number IORG0011023 and Ariel University, IRB number AU-ARC-HS-20231109.


