Once the data are collected and cleaned, we can start to explore features of the network. Taking an initial look at descriptive network statistics is a good way to get an overview of the data and to spot red flags that signal problems with data entry or cleaning; the earlier these can be identified, the better. This chapter serves as a tutorial for doing so in R with the igraph package. It introduces the process of importing a data file into R and walks through the first things you might do with the data, including computing descriptive statistics of the network's structural features, integrating substantive attributes of nodes and links, and visualizing the network.
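A minimal sketch of these first steps, assuming a hypothetical two-column edge list file ("edges.csv" with columns "from" and "to"); the chapter's own examples may differ:

```r
# A minimal sketch of importing a network and checking descriptive statistics.
# "edges.csv" is a hypothetical edge list with columns "from" and "to".
library(igraph)

edges <- read.csv("edges.csv")
g <- graph_from_data_frame(edges, directed = FALSE)

# Descriptive statistics: quick red-flag checks on the imported network
vcount(g)               # number of nodes
ecount(g)               # number of edges
edge_density(g)         # proportion of possible links present
summary(degree(g))      # degree distribution at a glance
count_components(g)     # unexpected fragments may signal entry errors

# A first visualization
plot(g, vertex.size = 5, vertex.label = NA)
```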
In epidemiological investigations, pathogen genomics can provide insights and test epidemiological hypotheses that would not have been possible through traditional epidemiology. Tools to synthesize genomic analysis with other types of data are a key requirement of genomic epidemiology. We propose a new ‘phylepic’ visualization that combines a phylogenomic tree with an epidemic curve. The combination visually links the molecular time represented in the tree to the calendar time in the epidemic curve, a correspondence that is not easily represented by existing tools. Using an example derived from a foodborne bacterial outbreak, we demonstrated that the phylepic chart communicates that what appeared to be a point-source outbreak was in fact composed of cases associated with two genetically distinct clades of bacteria. We provide an R package implementing the chart. We expect that visualizations that place genomic analyses within the epidemiological context will become increasingly important for outbreak investigations and public health surveillance of infectious diseases.
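As a hedged sketch of how such a chart might be built with the package: the phylepic() call and its arguments below are assumptions based on the description above, not verified API, so consult the package documentation for the actual interface.

```r
# Hedged sketch: combining a phylogenomic tree with an epidemic curve.
# NOTE: the phylepic() call below is an assumed interface, inferred from
# the package description; check the package documentation before use.
library(ape)        # read.tree() is the standard reader for Newick files
library(phylepic)

tree  <- ape::read.tree("outbreak.nwk")   # hypothetical tree file
cases <- read.csv("cases.csv")            # hypothetical: columns "name", "date"
cases$date <- as.Date(cases$date)

# Assumed constructor linking tree tips to case metadata by name and date
chart <- phylepic(tree, cases, name = "name", date = "date")
plot(chart)
```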
Realistic networks are rich in information, often too rich for all of it to be conveyed easily. Summarizing the network then becomes useful, and often necessary, for communication and understanding, keeping in mind that a summary necessarily loses information about the network. Further, networks often do not exist in isolation: multiple networks may arise from a given dataset, or multiple datasets may each give rise to a different view of the same network. In such cases and more, researchers need tools and techniques to compare and contrast those networks. In this chapter, we'll show you how to summarize a network using statistics, visualizations, and even other networks. From these summaries we then describe ways to compare networks, for example by defining a distance between networks. Comparing multiple networks using the techniques we describe can help researchers choose the best data processing options and unearth intriguing similarities and differences between networks in diverse fields.
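A small sketch of both ideas, using igraph summary statistics and one simple network distance (the Jaccard distance between edge sets); random graphs stand in for real data here:

```r
# Summarizing and comparing two networks with igraph.
library(igraph)

set.seed(42)
g1 <- sample_gnp(100, 0.05)   # two random graphs stand in for real data
g2 <- sample_gnp(100, 0.06)

# Summary statistics for each network
summarize_net <- function(g) {
  c(nodes      = vcount(g),
    edges      = ecount(g),
    density    = edge_density(g),
    clustering = transitivity(g, type = "global"))
}
summarize_net(g1)
summarize_net(g2)

# One simple network distance: Jaccard distance between edge sets,
# defined for graphs on the same node set
jaccard_distance <- function(ga, gb) {
  shared <- ecount(intersection(ga, gb))
  total  <- ecount(union(ga, gb))
  1 - shared / total
}
jaccard_distance(g1, g2)
```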
Type 2 diabetes (T2DM) poses a significant public health challenge, with pronounced disparities in control and outcomes. Social determinants of health (SDoH) contribute significantly to these disparities, affecting healthcare access, neighborhood environments, and social context. We discuss the design, development, and use of an innovative web-based application that integrates real-world data (electronic health records and geospatial files) to enhance comprehension of the impact of SDoH on T2DM health disparities.
Methods:
We identified a patient cohort with diabetes from the institutional Diabetes Registry (N = 67,699) within the Duke University Health System. Patient-level information (demographics, comorbidities, service utilization, laboratory results, and medications) was extracted to Tableau. Neighborhood-level socioeconomic status was assessed via the Area Deprivation Index (ADI), and geospatial files incorporated additional data related to points of interest (e.g., parks/green space). Interactive Tableau dashboards were developed to understand risk and contextual factors affecting diabetes management at the individual, group, neighborhood, and population levels.
Results:
The Tableau-powered digital health tool offers dynamic visualizations, identifying T2DM-related disparities. The dashboard allows for the exploration of contextual factors affecting diabetes management (e.g., food insecurity, built environment) and possesses capabilities to generate targeted patient lists for personalized diabetes care planning.
Conclusion:
As part of a broader health equity initiative, this application meets the needs of a diverse range of users. The interactive dashboard, incorporating clinical, sociodemographic, and environmental factors, enhances understanding at various levels and facilitates targeted interventions to address disparities in diabetes care and outcomes. Ultimately, this transformative approach aims to manage SDoH and improve patient care.
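As a generic illustration of the patient-to-neighborhood linkage described above (not the Duke implementation, which was built in Tableau; file and column names are hypothetical):

```r
# Generic sketch: linking patient-level records to neighborhood deprivation
# before loading into a dashboard. Assumed inputs: "patients.csv" with a
# census block group ID column ("geoid") and "adi.csv" with ADI rankings.
patients <- read.csv("patients.csv")
adi      <- read.csv("adi.csv")     # columns: geoid, adi_national_rank

linked <- merge(patients, adi, by = "geoid", all.x = TRUE)

# Flag patients living in the most deprived quartile of neighborhoods
linked$high_deprivation <- linked$adi_national_rank >=
  quantile(adi$adi_national_rank, 0.75, na.rm = TRUE)

table(linked$high_deprivation)
```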
This chapter introduces data-driven research methods for theatre and performance. Drawing on two case studies, the chapter demonstrates how to define and identify data, how to collect and organize it, and how to analyse it through computational methods. Careful attention is paid to the tension between a rigorous data model and the uncertainty and ‘messiness’ present in data’s sources. The conclusion promotes data-driven thinking as a way to expand the context and scope of TaPS analyses and to encourage explicit reflection on the mental categories and models within which we understand performance.
Despite admonitions to address attrition in experiments – missingness on Y – alongside best practices designed to encourage transparency, most political science researchers all but ignore it. A quantitative literature search of this journal – where we would expect to find the most conscientious reporting of attrition – shows low rates of discussion of the issue. We suspect there is confusion about the link between when attrition occurs and the type of validity it threatens, and limited guidance on which estimands are threatened by different attrition patterns. This is all exacerbated by limited tools to identify, investigate, and report patterns of attrition. We offer an R package – attritevis – to visualize attrition over time and by intervention, and include a step-by-step guide to identifying and addressing attrition that balances post hoc analytical tools with guidance for revising designs to ameliorate problematic attrition.
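As a generic illustration of the kind of plot described (a stand-in sketch with ggplot2, not the attritevis API; the input file and its columns are hypothetical):

```r
# Generic sketch of plotting attrition over the course of an experiment.
# Assumes long-format data with one row per respondent per question, in
# answer order: columns id, condition, question_order, answered (0/1).
library(ggplot2)
library(dplyr)

responses <- read.csv("experiment.csv")   # hypothetical input file

attrition <- responses |>
  group_by(condition, question_order) |>
  summarise(prop_answered = mean(answered), .groups = "drop")

ggplot(attrition, aes(question_order, prop_answered, colour = condition)) +
  geom_line() +
  labs(x = "Question order", y = "Proportion still responding",
       title = "Attrition over the experiment, by condition")
```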
The article is devoted to computer modeling, visualization, and synthesis of a digital antenna array used for transmitting and receiving signals in industrial applications. The paper proposes an iterative method for the amplitude-phase synthesis of an antenna array subject to requirements on the side-lobe envelope. The proposed method makes it possible to determine the complex amplitudes of the elements of a digital antenna array for any given weight function, based on theorems of matrix theory. The novelty of the method lies in the iterative procedure for choosing the weight function, which takes into account the excess side-lobe level of the digital antenna array. In the course of solving the synthesis problem, a weight function was found that satisfies the requirements on the radiation pattern without reducing the directivity coefficient; the signal-to-noise ratio was used as the criterion. For the first time, an analytical expression is given for forming the weight function during the iterative process, taking into account the requirements on the side-lobe envelope. The operability and convergence of the proposed method were confirmed through numerical studies on the example of a digital antenna array, and the numerical analysis confirmed the effectiveness of the proposed synthesis method.
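For orientation, the quantity being shaped by such a synthesis is the array factor; the sketch below computes and plots it for a given set of complex element weights. This is an illustrative computation only, not the authors' iterative algorithm:

```r
# Illustrative only: array factor of a uniform linear array for a given
# vector of complex element weights w, with element spacing d (wavelengths).
array_factor_db <- function(w, d = 0.5, theta = seq(-90, 90, by = 0.1)) {
  n  <- seq_along(w) - 1
  th <- theta * pi / 180
  # AF(theta) = sum_n w_n * exp(i * 2*pi * d * n * sin(theta))
  af <- sapply(th, function(t) sum(w * exp(1i * 2 * pi * d * n * sin(t))))
  20 * log10(Mod(af) / max(Mod(af)))   # normalized pattern in dB
}

w  <- rep(1 + 0i, 16)   # uniform weights: side lobes near -13 dB
af <- array_factor_db(w)
plot(seq(-90, 90, by = 0.1), af, type = "l",
     xlab = "Angle (deg)", ylab = "Normalized level (dB)", ylim = c(-60, 0))
```

Choosing a tapered (non-uniform) weight function lowers the side-lobe envelope at the cost of directivity, which is the trade-off the proposed iterative procedure manages.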
Large-scale multiplex tissue analysis aims to understand processes such as development and tumor formation by studying the occurrence and interaction of cells in local environments, for example in tissue samples from patient cohorts. A typical procedure in the analysis is to delineate individual cells, classify them into cell types, and analyze their spatial relationships. All of these steps come with a number of challenges, and to address them and identify the bottlenecks of the analysis, it is necessary to include quality control tools in the analysis workflow. This makes it possible to optimize the steps and adjust settings in order to get better and more precise results. Additionally, the development of automated approaches for tissue analysis requires visual verification to reduce skepticism regarding the accuracy of the results; quality control tools can help build users' trust in automated approaches. In this paper, we present three plugins for visualization and quality control in large-scale multiplex tissue analysis of microscopy images. The first plugin focuses on the quality of cell staining, the second enables interactive evaluation and comparison of different cell classification results, and the third serves for reviewing interactions of different cell types.
In those moments, or seconds, or hours, when focus on our creative work overrides input from the outside world, we are in the altered state of a creative trance. Shaped by individual experiences, social circumstances, and cultural traditions, the creative trance is multifaceted and psychologically significant. Taking numerous forms throughout the sciences, arts, sports, and self-transformation, it can change the world. At age sixteen, Albert Einstein’s visualization of riding on a light beam became a basis for his theory of special relativity. It can also be the inner deliberations of both Mozart and Shostakovich, who composed finished music in their minds. Cross-culturally, meditation and sleep are central to a creative trance. The Native North American Chippewa weave dream images into their beadwork and banners. In addition to the trance of creation, there is an audience creative trance of reception that ranges from pleasant enjoyment to the overwhelming response of the Stendhal syndrome.
TBT-S helps parents and other Supports to be aware of unique YA developmental needs and conflicts to offer appropriate assistance toward AN recovery. An experiential activity on guided reflections can enhance empathy for YA development along with skill development and the Young Adult Behavioral Agreement.
Chapter 3 continues the focus on methodological concerns, outlining and demonstrating how a typical acoustic analysis proceeds from fieldwork to acoustic measurement and data extraction to analysis. This treatment goes deeper into the nuts and bolts of "doing" sociophonetics, with more hands-on procedures for the identification, processing, and measurement of vowels and sibilants. This chapter is intended to introduce readers to, and provide the workflow for, some basic analyses using common software. The survey of key methods and fundamental terms presented in this and the previous chapter provides a foundation for reading and understanding the approaches used in the sociophonetic literature and for exploring the research presented in subsequent chapters.
This chapter highlights judicial file-work backstage. It is particularly interested in the socially distributed and materially mediated character of these practices, and zooms in on the techniques judges have developed to navigate case files accurately and efficiently. It also traces how these work practices were disrupted and rearticulated as a result of the digitization of legal case files. In so doing, this chapter shows how an emphasis on this non-human actor – the legal case file – can rearticulate understandings of judicial decision-making and rule-following that locate them in the "head of the judge". Tracing how and where judges draw on the legal case file in their sense-making, this chapter instead treats both judicial thinking and seeing as empirically investigable phenomena, and suggests that our conceptions of legal practices can benefit from paying attention to the materiality of legal case files. It thus treats case files not (only) as informational objects, but as materially recalcitrant objects that shape and direct judicial attention in specific ways.
Germination experiments are becoming increasingly complex and now routinely involve several experimental factors. Recently, a two-step approach utilizing meta-analysis methodology was proposed for estimating hierarchical models suitable for describing data from such complex experiments. Step 1 involves fitting models to data from each sub-experiment, whereas Step 2 involves combining the estimates from all model fits obtained in Step 1. However, one shortcoming of this approach was that visualization of the resulting fitted germination curves was difficult. Here, we describe in detail an improved two-step analysis that allows visualization of cumulative germination data together with fitted curves and confidence bands. We also demonstrate in detail, through two examples, how to carry out the statistical analysis in practice.
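A generic sketch of the two-step idea (not the authors' exact code): per-sub-experiment curve fits with drc in Step 1, pooled with metafor in Step 2; the input data frame and its columns are hypothetical:

```r
# Generic two-step sketch. Assumes a data frame with columns:
# exp (sub-experiment ID), time, prop (cumulative germination proportion).
library(drc)
library(metafor)

germ <- read.csv("germination.csv")   # hypothetical input file

# Step 1: fit a three-parameter log-logistic curve per sub-experiment and
# record the estimate and SE of the time to 50% germination (parameter e)
step1 <- lapply(split(germ, germ$exp), function(d) {
  fit <- drm(prop ~ time, data = d, fct = LL.3())
  est <- summary(fit)$coefficients["e:(Intercept)",
                                   c("Estimate", "Std. Error")]
  data.frame(yi = est[1], sei = est[2])
})
step1 <- do.call(rbind, step1)

# Step 2: combine sub-experiment estimates in a random-effects meta-analysis
rma(yi = step1$yi, sei = step1$sei)
```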
DocuSky is a personal digital humanities platform for humanities scholars, which aims to become a platform on which a scholar can satisfy all her digital needs with no direct IT assistance. To this end, DocuSky provides tools for a scholar to download material from the Web and prepare (annotating, building metadata) her material, a one-click function to build a full-text searchable database, and tools for analysis and visualization. DocuSky advocates the separation of digital content and tools. Being an open platform, it encourages IT developers to build tools to suit scholars’ needs, and it has already incorporated several popular Web resources and external tools into its environment. Interoperability is ensured through the format DocuXML. In addition to describing the design principles of DocuSky, we will show its main features, together with several important tools and examples. DocuSky was originally developed for Sinological studies. We are enriching it to work in other languages.
This chapter explores the scope of imagination in the classical tantric texts and brings its salient features to a global philosophical discourse. Tantric texts treat imagination as a faculty of the mind that can be cultivated to its fullest extent. In this paradigm, imagination is an inherent power of the self, and upon its proper channeling, it can be transformed into the faculty of creativity, an inherent property of the self that remains otherwise dormant. These texts prescribe meticulous visualization processes in order to explore the limits of imagination. Apparently, what these texts meant by imagination is distinct from daydreaming or pure fantasy, as there is an integration of memory and attention in the course of projecting the mind to some intended objects or events. The central argument of this chapter is that this treatment of imagination as a faculty and the application of visualization for enhancing the power of imagination has the potential to address some key aspects in the contemporary philosophical discourse on imagination. This understanding of imagination can also help us devise ways to transform a subject’s self-assessment in order to assist him or her in negotiating his or her role in the socially constructed reality.
Archaeologists are tasked with balancing the call for open data against the need to keep sensitive archaeological site locations confidential. Low-resolution mapping and data aggregation are the methods most commonly used to hide site locations; however, we understand little about the effectiveness of these practices. Geomasking, the practice of obscuring observed geographic points, is widely used to anonymize public health data and offers a source of methods for sharing archaeological site data. Archaeologists have available to them a number of geomasking methods that balance open data and site security in different ways. Low-resolution mapping at several scales, random direction with fixed radius, random perturbation donut, and Gaussian donut techniques are tested on a set of archaeological site locations. Random perturbation donuts achieved the best balance between obscuring archaeological locations and conveying observed spatial patterning. Researchers should carefully consider how they convey archaeological location data, as commonly used low-resolution scales may not provide the desired level of obscurity, and they should be explicit about how and why their methods of site visualization were chosen.
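A minimal sketch of the random perturbation donut technique, assuming projected coordinates in meters and illustrative radii:

```r
# Donut geomasking: displace each point in a random direction by a random
# distance between an inner radius (guaranteeing a minimum displacement)
# and an outer radius (bounding the spatial error). Coordinates are
# assumed to be in a projected system, in meters.
donut_mask <- function(x, y, r_min = 100, r_max = 500) {
  stopifnot(r_min > 0, r_max > r_min)
  n     <- length(x)
  angle <- runif(n, 0, 2 * pi)
  # sqrt() makes displacement uniform over the donut's area, not its radius
  dist  <- sqrt(runif(n, r_min^2, r_max^2))
  data.frame(x = x + dist * cos(angle), y = y + dist * sin(angle))
}

set.seed(1)
sites  <- data.frame(x = runif(10, 0, 1e4), y = runif(10, 0, 1e4))
masked <- donut_mask(sites$x, sites$y)
```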
Objective: Develop an awareness of the variety of both simple and more nuanced data visualization tactics and tools available. Appreciate how to develop visual renderings that best meet the needs of data examination and storytelling for the audience in question.
This third edition capitalizes on the success of the previous editions and leverages the important advancements in visualization, data analysis, and sharing capabilities that have emerged in recent years. It serves as an accelerated guide to decision support designs for consultants, service professionals and students. This 'fast track' enables a ramping up of skills in Excel for those who may have never used it to reach a level of mastery that will allow them to integrate Excel with widely available associated applications, make use of intelligent data visualization and analysis techniques, automate activity through basic VBA designs, and develop easy-to-use interfaces for customizing use. The content of this edition has been completely restructured and revised, with updates that correspond with the latest versions of software and references to contemporary add-in development across platforms. It also features best practices in design and analytical consideration, including methodical discussions of problem structuring and evaluation, as well as numerous case examples from practice.
Edited by
Claudia R. Binder, École Polytechnique Fédérale de Lausanne; Romano Wyss, École Polytechnique Fédérale de Lausanne; Emanuele Massaro, École Polytechnique Fédérale de Lausanne
Ecologic sustainability assessments are of increasing importance in understanding the physical resource metabolism of urban systems. In Stockholm, the so-called Hammarby Model visualised important synergies in waste and energy flows in the Hammarby Sjöstad urban district and supported improved metabolic thinking. Following the success of this approach, the Eco-Cycle Model 2.0 for the Royal Seaport was developed in cooperation between KTH University and the City of Stockholm. The Eco-Cycle Model 2.0 can take account of more dimensions than the Hammarby Model, including overall and detailed descriptions of resource flows in a lifecycle perspective. Important starting points for the model were (1) global and local challenges concerning the use of resources, with specific relevance for urban development, (2) available models which visualise functions, resource flows, and resource synergies and (3) approaches to material, energy, and water accounting. The primary objective of the model is to show important connections and synergies between resource flows in a modern urban area. Secondary objectives that can be fulfilled in the long term are: to be a tool for the monitoring and follow-up of environmental objectives, to serve as a dynamic tool for the analysis of resource flows, and to contribute to improved urban planning.