Conversation-analytic (CA) research projects have begun to involve the collection of interaction data in laboratory settings, as opposed to field settings, not for the purpose of experimentation, but in order to systematically analyze interactional phenomena that are elusive, not in the sense of being rare (i.e., ‘seldom occurring’), but in the sense of not being reliably or validly detected by analysts in the field using relatively standard recording equipment. This chapter (1) describes two CA methodological mandates – ‘maintaining mundane realism’ and ‘capturing the entirety of settings’ features’ – and their tensions; (2) provides four examples of elusive phenomena that expose these tensions, including gaze orientation, blinking, phonetic features during overlapping talk, and inhaling; and (3) discusses analytic ramifications of elusive phenomena, and provides a resultant series of data collection recommendations for both field and lab settings.
This chapter explores the performative power of research methods, and specifically the power of audio-visual technologies – the video camera – in capturing and re-presenting data concerning markets, their innovation and transformation. Our claim is that cameras and their research outputs are engines of change. They act as market-making devices that not only inform but additionally perform markets. After reflecting on the Market Studies conceptualization of markets and the role of the camera as a market-making device, we show how the camera provides new perspectives, generates a deeper understanding of context and opens up new opportunities for the reflexive action that has the power to transform both our understandings of markets and how they are performed. We draw on our own experiences and extant research to consider how the camera acts as a socio-technical and sentimental device to shake up existing practices, generating opportunities for new data collection and analysis and for new ways of seeing, representing and expressing the politics of markets – transforming them in the process. We argue that i) zooming out, ii) zooming in, iii) refocusing, iv) slowing down action and motion, and v) editing have the capacity to generate and reveal different constitutive components of a market’s sociomaterial realities by drawing attention to actors, objects and emotions, and their relations with their wider social setting. We conclude that cameras can constitute different realities, breaking down taken-for-granted binaries between society and nature, opening opportunities to build moral markets in new and innovative ways.
Following the COVID-19 pandemic, assessments by video link became a standard and acceptable form of medico-legal evaluation. The various challenges to achieving an accurate and robust medico-legal assessment via a remote platform are explored in this clinical reflection. It is concluded that any limitations to a remotely undertaken assessment must be highlighted to the court and an in-person assessment considered as a reasonable alternative in some cases.
This chapter examines the topic of Puccini on video – the composer’s appearances as a character in narrative films or television dramas, and versions of his operas conceived expressly for film or television, starting from the arrival of sound cinema. Puccini started being fictionalised as a film character from the 1950s. Detailed attention is paid to Carmine Gallone’s biopic Giacomo Puccini of 1952, a film which takes considerable liberties with historical facts, and the same director’s Casa Ricordi, about the composer’s publishing house. The discussion then moves to the 1980s, to consider Tony Palmer’s Puccini, and to the 2000s, to discuss Paolo Benvenuti’s Puccini e la fanciulla, both of which home in on the Doria Manfredi scandal. The author then discusses transferrals to video of Puccini’s operas, concentrating particularly on Tosca and examining films from the 1940s onwards. Particular attention is paid to a filmed version by Gallone, to the same director’s Avanti a lui tremava tutta Roma, and to a series of slightly later film versions made especially for Italian television during the 1960s and 70s. A 2001 film of Tosca by Benoît Jacquot concludes the survey, chosen because it interrogates the tension between televisual or filmic authenticity and operatic artificiality.
This chapter examines constitutional theory and doctrine as applied to emerging government regulations of video image capture and proposes a framework that will promote free speech to the fullest extent possible without facilitating unnecessary intrusions into legitimate privacy interests.
An electroencephalogram (EEG) is a critical tool in epilepsy diagnosis. The three common EEG durations are 25 minutes, 1 hour, and 24 hours. One-hour EEGs are superior in showing epileptiform abnormalities, while 24-hour EEGs are used to characterize seizure and nonepileptic event semiology and to guide treatment of status epilepticus. The term EEG montage refers to the way EEG electrodes are ordered for interpretation. Odd-numbered electrodes are on the left and even-numbered electrodes on the right. Smaller numbers are closer to the midline, while z means the electrode is on the midline. This chapter will explore the numerous normal and variant findings, such as the posterior dominant rhythm (PDR) and wicket spikes. Epileptiform findings such as sharp waves or seizure patterns are indicative of epilepsy. Slowing or increased amplitude can indicate cerebral changes that are not epileptiform. Electroencephalogram reports should concisely and accurately convey both the electrical findings and their clinical relevance to patient care. Electroencephalogram reports should indicate an epilepsy diagnosis only when clear electrical evidence exists.
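The electrode naming convention described above (odd digits left, even digits right, z on the midline) can be expressed in a few lines of code. This is an illustrative sketch, not material from the chapter; the example labels are drawn from the standard 10–20 layout.

```python
# Classify scalp-EEG electrode labels by the naming convention described
# above: odd trailing digit -> left hemisphere, even -> right, 'z' -> midline.
# (Smaller digits sit closer to the midline, e.g. F3 is more medial than F7.)

def electrode_side(label: str) -> str:
    """Return 'left', 'right', or 'midline' for a label like 'F3' or 'Cz'."""
    suffix = label[-1].lower()
    if suffix == "z":
        return "midline"
    return "left" if int(suffix) % 2 == 1 else "right"

for name in ["F3", "F4", "Cz", "O1", "T8"]:
    print(name, electrode_side(name))
```

Running the loop prints, for example, `F3 left`, `F4 right`, and `Cz midline`.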
Instructional information can be categorised as being either transient or permanent. Spoken information or videos provide examples of transient information, while written information or static graphics provide examples of permanent information. The major characteristic of transient information is that current information, once presented, disappears to be replaced by new information, with the old information difficult to access. Permanent information, once presented, remains available and accessible for the duration of the instructional episode. The transient information effect or principle can be demonstrated when the same information is presented in either transient form such as speech or permanent form such as identical, written text. The effect occurs when learning is facilitated by the permanent version of the information. Cognitive load theory can be used to explain such results.
Having presented the interactional perspective on sensoriality proposed by this book (Chapter 1), Chapter 2 offers a methodology able to document and analyze embodied sensory engagements in social interaction. It discusses fieldwork, video-recordings, and multimodal transcriptions, as well as alternative approaches, showing the coherence and adequacy of a video and multimodal methodology for studying multisensoriality. It also presents the empirical case that will be developed in the book, focusing on food as an exemplary field in which all the senses play a crucial role. It presents the field of study, an exemplary activity in which participants sensorially engage with food: practices of looking, touching, smelling, and tasting cheese in gourmet shops. The empirical data on which the remainder of the book is based are video-recordings of shop encounters between cheese sellers and customers, gathered in a dozen cities in Europe and drawing on a dozen different languages. This unique and rich corpus of video data enables a systematic analysis of the detailed way in which it is possible, within a praxeological, interactional, multimodal approach, to study multisensoriality in action.
The American avant-garde theatre of the post-World War II era, with its underlying engagement with the betterment of society and a foregrounding of the body, either solo or collective, could be seen as an extension of the Romantic project. But by the 1990s, the ideas and impulses that fueled its artistic drive seemed to dissipate as it became subsumed by Postmodernism and also by popular culture. The avant-garde energies and impulses did not disappear, however, and increasingly they could be found in the theatre’s eager adoption and exploration of new technologies and digital media. By mediatizing live performance, the new technologies often became co-equal with, or dominant over, the human actors. Beginning with groups and individual artists such as Squat, The Wooster Group, and Laurie Anderson and continuing through The Builders Association, Big Art Group, and Annie Dorsen, among many others, a post-avant-garde has emerged that does not fetishize technology, but rather embraces it as a tool to alter consciousness—much as the historical avant-garde did—and to expand the possibilities and definitions of performance.
Chapter 6 deals with the legal and political forces determining the visibility of Israeli state violence against young Palestinians. First, it examines three ways in which Israel subjects Palestinians to its gaze or pressures them to internalize it: putting up threatening posters with photographs of Palestinian youth or their parents; taking pictures of unsuspecting Palestinian youth; and soldiers filming their abuse of young Palestinians. Second, this chapter lays bare a range of Israeli practices and discursive techniques operating to conceal, downplay, and legitimize violence against young Palestinians: the prevention of such violence from being witnessed in real time; the destruction of incriminating evidence; restrictions on publishing unflattering information; the failure to record interrogations; torture methods that leave no physical marks; legally sanctioned secrecy; the impunity of alleged perpetrators; and their depiction as merely a few rotten apples. Finally, the chapter offers a rethinking of evidence. Israel and its human rights critics tend to privilege video footage and state agents’ testimonies, thereby validating both Israel’s dismissal of uncorroborated Palestinian allegations and its “rotten apples” narrative. It is argued that alternative types of evidence foreground the representation at work and can thus highlight the invisibility shrouding both state violence and young witnesses.
This chapter reviews five decades of research on reactions to mirrors and self-recognition in nonhuman primates, starting with Gallup’s (1970) pioneering experimental demonstration of self-recognition in chimpanzees and its apparent absence in monkeys. Taking a decade-by-decade approach, developments in the field are presented separately for great apes on the one hand, and all other primates on the other (prosimians, monkeys, and so-called lesser apes), considering both empirical studies and theoretical issues. The literature clearly shows that among nonhuman primates the most compelling evidence for something approaching human-like visual self-recognition is seen only in great apes, despite an impressive range of sometimes highly original procedures employed to study many monkey species. In the past decade, research has been shifting from simple questions about whether great apes can self-recognize (now considered beyond doubt), to addressing possible biological bases for individual and species differences in the strength of self-recognition, analysis of possible adaptive functions of the capacity for self-visualization, and searching for evidence of self-recognition in a range of nonprimate species.
YouTube is increasingly used as a source of healthcare information. This study evaluated the quality of videos on YouTube about cochlear implants.
Methods
YouTube was searched using the phrase ‘cochlear implant’. The first 60 results were screened by two independent reviewers. A modified Discern tool was used to evaluate the quality of each video.
Results
Forty-seven videos were analysed. The mean overall Discern score was 2.0 out of 5.0. Videos scored higher for describing positive elements such as the benefits of a cochlear implant (mean score of 3.4) and scored lower for negative elements such as the risks of cochlear implant surgery (mean score of 1.3).
Conclusion
The quality of information regarding cochlear implant surgery on YouTube is highly variable. These results demonstrated a bias towards the positive attributes of cochlear implants, with little mention of the risks or uncertainty involved. Although videos may be useful as supplementary information, critical elements required to make an informed decision are lacking. This is of particular importance when patients are considering surgery.
High-quality behavioural data can be recorded using cheap and simple technologies such as check sheets and sound recorders. Advances in technologies for data recording have made big data available to behavioural scientists, which in turn has stimulated the development of AI technologies for automated data processing. A data pipeline describes the workflow of data recording, processing and analysis, including details of the technologies used in each step. The choice of technology for capturing behavioural data will depend on the research question and the resources available, the quantity of data required, where the data are to be collected, the amount of interaction with subjects and the likely impact of the technology on the subjects and their environment. Data that are initially recorded in a relatively rich form will require subsequent processing to code behavioural metrics. Coding of data can be either manual or automated using rules-based approaches and machine learning.
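The recording–processing–analysis pipeline described above can be sketched in miniature. This is a hypothetical illustration: the function names, the event data, and the rules-based coding rule are invented for the example and do not come from the chapter.

```python
# Minimal sketch of a behavioural data pipeline:
# record rich raw data -> code it into behavioural categories -> analyse a metric.

def record(raw_events):
    """Recording step: capture rich raw data (here, timestamped event labels)."""
    return list(raw_events)

def code_behaviour(events, rule):
    """Processing step: rules-based coding, keeping only events matching a rule."""
    return [e for e in events if rule(e)]

def analyse(coded):
    """Analysis step: compute a simple behavioural metric (an event count)."""
    return len(coded)

events = [("groom", 1.0), ("rest", 2.5), ("groom", 4.2)]
grooming = code_behaviour(record(events), lambda e: e[0] == "groom")
print(analyse(grooming))  # number of grooming bouts
```

In practice the `code_behaviour` step is where manual coding, a rules-based system, or a trained machine-learning classifier would be substituted, without changing the shape of the surrounding pipeline.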
Edited by
Hamit Bozarslan, Ecole des Hautes Etudes en Sciences Sociales, Paris; Cengiz Gunes, The Open University, Milton Keynes; Veli Yadirgi, School of Oriental and African Studies, University of London
This chapter problematizes the film modes of Kurdish presence to address the need for a radicalization of artistic tools in the name of artistic autonomy. The industrial mode of cinema’s desire for national totalities is valid in the Kurdish case through a pedagogy of the real to place traumatized Kurdishness in a victimhood discourse in the name of the recognition of Kurdish languages, and to claim its own popular narrations within limits defined by hegemonic powers – i.e., the officially recognized space for Kurdish languages in movie theatres. The limits of this very popular audience are also flagged by international film festivals’ taste and room for them. Within such a historical and political context, which sets Kurdish cinematography as a discursive tool within capitalist film modes, a claim for truth-telling emerges as the domestication of non-linear and non-smooth conflict zones in favour of a consumable form of Kurdish culture.
This chapter addresses the use of technological media in contemporary adaptations of Greek tragedies that have used the form, narratives, and cultural cachet of Greek tragedy to create work that engages spectators in examinations of human culture and behavior which have deep historical and emotional resonance, even when the productions themselves destabilise and sometimes undermine the cultural position of their ancient Greek referents. The approaches span a large gamut from the use of video as scenography to the immersion of the audience in theatrical landscapes fragmented through media. Central to the discussion are artists such as the Wooster Group, Jay Scheib, John Jesurun, and Jan Fabre, who use technology to create intermedial effects that express and interrogate the relationship of media to contemporary culture and representation. These works manage to encapsulate the rapidly changing modes of discourse, both live and mediated, and the ever-increasing problematics of representation in a media-saturated world.
Dr. Fischer traces the process of transformation from the neurochemistry of motivation and John Schumann's derivative stimulus appraisal theory, where need comes from relevance, the potential for self and social status, novelty, pleasantness, and, above all, the ability to cope. Technology tools – learning management systems, video and audio recording, use of films and videos for culture study, and relevant applications (apps) – are discussed as means to develop coping with the language and culture, thereby increasing the probability of learner transformation to intrinsically motivated persons able to appreciate and learn from others and continue that process throughout their adult lives.
Videolaryngoscopes have been in existence for several decades but in the last decade have taken a central role in both difficult and routine airway management. During that time videolaryngoscopy has not only become embedded in most difficult airway algorithms but the technique has become part of core airway management skills and the use of awake videolaryngoscopy has increased. This chapter describes the various types of videolaryngoscopes, their roles, strengths and limitations. Strategies to optimise use of Macintosh and hyperangulated devices are described as well as which adjuncts are best suited to their use. The issue of ‘can see, cannot intubate’ is discussed along with techniques to overcome it. The role of videolaryngoscopy outside the operating theatre, in critical care, in the emergency department and in pre-hospital care is discussed in this and other chapters.
The coronavirus disease 2019 (COVID-19) pandemic introduced challenges to the use of simulation, including limited personal protective equipment and restricted time and personnel. Our use of video for in situ simulation aimed to circumvent these challenges and assist in the development of a protocol for protected intubation and simultaneously educate emergency department (ED) staff. We video-recorded a COVID-19 respiratory failure in situ simulation event, which was shared by a facilitator both virtually and in the ED. The facilitator led discussions and debriefs. We followed this with in situ run-throughs in which staff walked through the steps of the simulation in the ED, handling medications and equipment and becoming comfortable with use of isolation rooms. This application of in situ simulation allowed one simulation event to reach a wide audience, while allowing participants to respect social distancing, and resulted in the education of this audience and successful crowdsourcing for a protocol amidst a pandemic.
Conventional tests that use written information for the evaluation of sign language (SL) comprehension introduce distortions due to the translation process. This affects the results and conclusions drawn and, for that reason, it is necessary to design and implement same-language, interpreter-independent evaluation tools. Novel web technologies facilitate the design of web interfaces that support online, multiple-choice questionnaires, while exploiting the storage of tracking data as a source of information about user interaction. This paper proposes an online, multiple-choice sign language questionnaire based on an intuitive methodology. It helps users to complete tests and automatically generates accurate, statistical results using the information and data obtained in the process. The proposed system presents SL videos and enables user interaction, fulfilling requirements that SL interpretation is not able to cover. The questionnaire feeds a remote database with the user answers and powers the automatic creation of data for analytics. Several metrics, including time elapsed, are used to assess the usability of the SL questionnaire, defining the goals of the predictive models. These predictions are based on machine learning models, with the demographic data of the user as features for estimating the usability of the system. This questionnaire reduces costs and time in terms of interpreter dedication, as well as widening the amount of data collected while employing the user’s native language. The validity of this tool was demonstrated in two different use cases.