What makes one sentence easy to read and another a slog that demands rereading? Where do you put information you want readers to recall? What about details you need to reveal but want readers to forget? Drawing on cognitive neuroscience, psychology, and psycholinguistics, this book provides a practical guide on how to write for your reader. Its chapters introduce the five 'Cs' of writing – clarity, continuity, coherence, concision, and cadence – and demonstrate how to use these features to bring your writing to life. This science-based guide also shows you how to improve your writing while making the writing process speedier and more efficient. Brimming with examples, this humorous, surprisingly irreverent book provides writers with the tools they need to master everything from an email to a research project. If you believe good writers are simply born that way, Writing for the Reader's Brain will change your mind – and, quite possibly, your life.
“Writing Is a System” debunks the popular view that writing is an art, best learned by reading selections of good writing and practicing composing. Instead, writing is a system that involves understanding what factors make sentences seem easy to read and paragraphs well organized. This chapter also examines the relevance of readability scores in assessing writing.
A useful way to prepare the public for disasters is to teach them where to get information. The purpose of this study is to evaluate the readability and appropriateness of the content of websites prepared for the public on disaster preparedness.
Methods
In September-October 2022, we evaluated 95 disaster preparedness websites (intended for the public) using the Ateşman Readability Index, JAMA criteria, DISCERN, and a new researcher-created content comparison form. Evaluation scores were compared according to information sources.
Results
Of the websites included in the research, 45.2% represented government institutions (GIG), 38.0% non-profit organizations (NPOG), 8.4% municipal organizations (MOG), and 8.4% other organizations (OG). The proportion of websites scoring above average was 36.8% on the content evaluation form, 51.6% on the DISCERN scale, 53.7% on the Ateşman Readability Index, and 55.8% on the JAMA criteria. On the content evaluation form, websites belonging to the MOG scored higher than the other websites. Websites in the other organizations group also scored higher than the remaining websites on the JAMA criteria.
Conclusions
The study revealed that websites created to increase public knowledge of disaster preparedness fall short in terms of readability, quality, and content.
Researchers have taken great interest in the assessment of text readability. This study expands on this research by developing readability models that predict the processing effort involved during first language (L1) and second language (L2) text reading. Employing natural language processing tools, the study focused on assessing complex linguistic features of texts, and these features were used to explain the variance in processing effort, as evidenced by eye movement data for L1 or L2 readers of English that were extracted from an open eye-tracking corpus. Results indicated that regression models using the indices of complex linguistic features provided better performance in predicting processing effort for both L1 and L2 reading than the models using simple linguistic features (word and sentence length). Furthermore, many of the predictive variables were lexical features for both L1 and L2 reading, emphasizing the importance of decoding for fluent reading regardless of the language used.
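As a toy illustration of this modelling approach (not the study's actual pipeline; the data and feature names below are invented), one can compare a regression over simple features with one that adds a lexical feature such as word frequency:

```python
# Hypothetical sketch: compare simple vs. richer linguistic features
# for predicting processing effort (e.g., total fixation time).
# Data and feature names are illustrative, not from the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
word_len = rng.normal(5, 1, n)      # mean word length per passage
sent_len = rng.normal(18, 4, n)     # mean sentence length per passage
word_freq = rng.normal(4, 0.5, n)   # mean log word frequency (lexical feature)
# Simulated reading times: frequency matters beyond length alone.
reading_time = 50 * word_len + 10 * sent_len - 60 * word_freq + rng.normal(0, 30, n)

simple = np.column_stack([word_len, sent_len])
complex_ = np.column_stack([word_len, sent_len, word_freq])

for name, X in [("simple", simple), ("complex", complex_)]:
    r2 = cross_val_score(LinearRegression(), X, reading_time, cv=5, scoring="r2").mean()
    print(f"{name} features: mean cross-validated R^2 = {r2:.2f}")
```

In this simulation the richer model recovers more variance, mirroring the study's finding that lexical features outperform word and sentence length alone.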
Patient and public involvement (PPI) groups can provide valuable input to create more accessible study documents with less jargon. However, it is not known whether this review process improves accessibility for potential participants.
Aims
We assessed whether participant information sheets were rated as more accessible after PPI review and which aspects of information sheets and study design were important to mental health patients compared with a control group with no mental health service use.
Method
This was a double-blind quasi-experimental study using a mixed-methods explanatory design. Patients and control participants quantitatively rated pre- and post-review documents. Semi-structured interviews were thematically analysed to gain qualitative feedback on opinions of information sheets and studies. Two-way multivariate analysis of variance was used to detect differences in ratings between pre- and post-review documents.
Results
We found no significant (P < 0.05) improvements in patient (n = 15) or control group (n = 21) ratings after PPI review. Patients and controls both rated PPI as of low importance in studies and considered the study rationale as most important. However, PPI was often misunderstood, with participants believing that it meant lay patients would take over the design and administration of the study. Qualitative findings highlight the importance of clear, friendly and visually appealing information sheets.
Conclusions
Researchers should be aware of what participants want to know about so they can create information sheets addressing these priorities, for example, explaining why the research is necessary. PPI is poorly understood by the wider population and efforts must be made to increase diversity in participation.
In the reading section of this chapter, we look at how much vocabulary is needed to gain meaning-focused input through reading material written for native speakers. We then look at what a well-balanced reading program for learners of English as a foreign language should contain to maximise vocabulary growth, stressing the need to use vocabulary-graded material, particularly graded readers. Such a course should provide opportunities for extensive reading, a focus on language features through intensive reading, and the development of reading fluency through speed reading. Finally, we look at how learners can be supported to read ungraded texts, using techniques such as narrow reading, pre-teaching, intensive reading, and glossing. In order to gain 98 per cent coverage of unsimplified text, learners need to know most of the high-frequency and mid-frequency words, totalling around 8,000–9,000 word families. In the writing section of this chapter, we look at the effect of vocabulary use on the quality of writing, measuring written productive knowledge of vocabulary and how to improve learners’ vocabulary use in writing.
Reading comprehension and fluency are crucial for successful academic learning and achievement. Yet, a rather large percentage of children still have enormous difficulties in understanding a written text at the end of primary school. In this context, the aim of our study was to investigate whether text simplification, a process of reducing text complexity while keeping its meaning unchanged, can improve reading fluency and comprehension for children learning to read. Furthermore, we were interested in finding out whether some readers would benefit more than others from text simplification as a function of their cognitive and language profile. To address these issues, we developed an iBook application for iPads, which allowed us to present normal and simplified versions of informative and narrative texts to 165 children in grade 2. Reading fluency was measured for each sentence, and text comprehension was measured for each text using multiple-choice questions. The results showed that both reading fluency and reading comprehension were significantly better for simplified than for normal texts. Moreover, poor readers and children with weaker cognitive skills (nonverbal intelligence, memory) benefitted to a greater extent from simplification than good readers and children with somewhat stronger cognitive skills.
Knowledge-based AI typically depends on a knowledge engineer to construct a formal model of domain knowledge – but what if domain experts could do this themselves? This paper describes an extension to the Decision Model and Notation (DMN) standard, called Constraint Decision Model and Notation (cDMN). DMN is a user-friendly, table-based notation for decision logic, which allows domain experts to model simple decision procedures without the help of IT staff. cDMN aims to enlarge the expressiveness of DMN in order to model more complex domain knowledge, while retaining DMN's goal of being understandable by domain experts. We test cDMN by solving the most complex challenges posted on the DM Community website. We compare our own cDMN solutions to the solutions that have been submitted to the website and find that our approach is competitive. Moreover, cDMN is able to solve more challenges than any other approach.
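DMN itself is a table notation rather than code, but the first-hit semantics of a simple decision table can be sketched in a few lines of Python. The discount rules below are invented for illustration and are not taken from the cDMN paper or the DM Community challenges:

```python
# Minimal sketch of first-hit decision-table semantics (illustrative rules,
# not from the cDMN paper). Each rule is a (condition, output) pair.
rules = [
    (lambda order, member: member and order >= 100, 0.15),  # members, large orders
    (lambda order, member: member,                  0.10),  # other members
    (lambda order, member: order >= 100,            0.05),  # large non-member orders
    (lambda order, member: True,                    0.00),  # default
]

def decide_discount(order_total, is_member):
    """Return the output of the first rule whose condition matches."""
    for condition, discount in rules:
        if condition(order_total, is_member):
            return discount

print(decide_discount(120, True))   # 0.15
print(decide_discount(40, False))   # 0.0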
Patient and public involvement can improve study outcomes, but little data have been collected on why this might be. We investigated the impact of the Feasibility and Support to Timely Recruitment for Research (FAST-R) service, made up of trained patients and carers who review research documents at the beginning of the research pipeline.
Aims
To investigate the impact of the FAST-R service, and to provide researchers with guidelines to improve study documents.
Method
A mixed-methods design assessing changes and suggestions in documents submitted to the FAST-R service from 2011 to 2020. Quantitative measures were readability, word count, jargon words before and after review, the effects over time, and whether changes were implemented. We also asked eight reviewers to blindly select a pre- or post-review participant information sheet as their preferred version. Reviewers’ comments were analysed qualitatively via thematic analysis.
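As a rough illustration of two of the quantitative measures mentioned above, the sketch below counts words and jargon hits in a pre- and a post-review text; the jargon list and sample texts are invented, not FAST-R materials:

```python
# Illustrative sketch of a pre/post document comparison on word count and
# jargon hits; the jargon list and sample texts are invented.
import re

JARGON = {"randomisation", "placebo", "cohort", "adverse"}

def stats(text):
    words = re.findall(r"[a-z']+", text.lower())
    jargon_hits = sum(1 for w in words if w in JARGON)
    return len(words), jargon_hits

pre = "You will be assigned by randomisation to the placebo cohort."
post = "A computer will decide by chance which group you join."

for label, doc in [("pre-review", pre), ("post-review", post)]:
    n_words, n_jargon = stats(doc)
    print(f"{label}: {n_words} words, {n_jargon} jargon terms")
```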
Results
After review, documents were longer and contained less jargon, but their readability did not improve. Jargon and the number of suggested changes increased over time. Participant information sheets had the most suggested changes. Reviewers wanted clarity and better presentation, and felt that documents lacked key information such as remuneration, risks involved, and data management. Six out of eight reviewers preferred the post-review participant information sheet. FAST-R reviewers provided jargon words and phrases with alternatives for researchers to use.
Conclusions
Longer documents are acceptable if they are clear, with jargon explained or substituted. The highlighted barriers to true informed consent are not decreasing, although this study has suggestions for improving research document accessibility.
Obtaining informed consent is a fundamental and ethical practice within human subjects research. Informed consent forms (ICFs) include a large amount of information, much of which may be unfamiliar to research subjects, and the revised Common Rule resulted in several required additions to that language. As limited health literacy impacts many potential subjects, efforts should be made to optimize subjects’ ability to read and understand ICFs. In this brief report, we describe an assessment of ICFs at an academic medical center to evaluate longitudinal changes in readability with the introduction and update of a plain language ICF template.
This chapter discusses how, and why, to write your program in a way which is as easy as possible for another human to understand. For example, we discuss how to use comments, choose informative names, lay out your code clearly, and structure it so that it does not resemble spaghetti.
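To make that advice concrete, here is a small invented example (ours, not the chapter's) of the same computation written twice, first opaquely and then with informative names and a comment:

```python
# Opaque version: what do f, x, y, and z mean?
def f(x, y):
    z = x * y * 9.81
    return z

# Readable version: informative names and a comment make the intent clear.
GRAVITY_M_PER_S2 = 9.81

def weight_newtons(mass_kg, load_factor):
    """Weight of a load, scaled by a dimensionless load factor."""
    return mass_kg * load_factor * GRAVITY_M_PER_S2
```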
The purpose of this study was to assess the readability of information on the Internet posted about coronavirus disease 2019 (COVID-19) to determine how closely these materials are written to the recommended reading levels.
Methods:
Using the search term “coronavirus,” information posted on the first 100 English-language websites was identified. Using an online readability calculator, multiple readability tests were conducted to ensure a comprehensive assessment.
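The same battery of indices, the Gunning Fog Index (GFI), Coleman–Liau Index (CLI), SMOG, and Flesch Reading Ease (FRE), can be reproduced with, for example, the Python textstat package; this sketch is our illustration, not the calculator the authors used:

```python
# Sketch: computing the readability indices named in the results with the
# textstat package (our tool choice; the study used an online calculator).
import textstat

text = "Wash your hands often with soap and water for at least twenty seconds."

print("Gunning Fog Index:", textstat.gunning_fog(text))
print("Coleman-Liau Index:", textstat.coleman_liau_index(text))
print("SMOG:", textstat.smog_index(text))
print("Flesch Reading Ease:", textstat.flesch_reading_ease(text))
```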
Results:
The mean readability scores ranged between grade levels 6.2 and 17.8 (graduate school level). Four of the 5 measures (GFI, CLI, SMOG, FRE) found that readability exceeded the 10th-grade reading level, indicating that the text of these websites would be difficult for the average American to read. The mean reading level for nearly all noncommercial and commercial websites was at or above the 10th-grade reading level.
Conclusions:
Messages about COVID-19 must be readable at an “easy” level and must contain clear guidelines for behavior. The degree to which individuals seek information in response to risk messages is positively related to the expectation that the information will resolve uncertainty. However, if the information is too complex to interpret and fails to resolve that uncertainty, it can contribute to feelings of panic.
Murmurs are abnormal audible heart sounds produced by turbulent blood flow. As abnormal findings, murmurs in a child may be a source of anxiety for family members. Families often use online materials to explore possible causes of these murmurs, given the accessibility of information on the Internet. In this study, we evaluated the quality, understandability, readability, and popularity of online materials about heart murmurs.
Methods:
An Internet search was performed for “heart murmur” using the Google search engine. The global quality score (on a scale of 1 to 5, corresponding to poor to excellent quality) and the Health on the Net code were used to measure the quality of the information presented. The understandability of the web pages identified was measured using the Patient Education Materials Assessment Tool (score range from 0 to 100%; scores below 70% reflect poor performance). The readability of each web page was assessed using four validated indices: the Flesch Reading Ease Score, the Flesch–Kincaid Grade Level, the Gunning Frequency of Gobbledygook, and the Simple Measure of Gobbledygook. The ALEXA traffic tool was used to rank domains’ popularity and visibility.
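For reference, the two Flesch indices used here follow their standard published formulas over counts of words, sentences, and syllables (stated from the standard definitions, not from this paper):

```latex
\mathrm{FRE} = 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right)
\qquad
\mathrm{FKGL} = 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
```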
Results:
We identified 230 English-language patient educational materials that discussed heart murmurs. After exclusions, a total of 86 web pages were evaluated for this study. The average global quality score was 4.34 (SD = 0.71; range from 3 to 5), indicating that the quality of information on most websites was good. Only 14 (16.3%) websites had Health on the Net certification. The mean understandability score for all Internet-based patient educational materials was 74.6% (SD = 12.8%; range from 31.2 to 93.7%), a score suggesting these materials were “easy to understand”. The mean readability levels of all patient educational materials were higher than the recommended sixth-grade reading level according to all indices applied; in other words, the materials were difficult to read. The average grade level for all web pages was 10.4 ± 1.65 (range from 7.53 to 14.13). The Flesch–Kincaid Grade Level was 10 ± 1.81, the Gunning Frequency of Gobbledygook level was 12.1 ± 1.85, and the Simple Measure of Gobbledygook level was 9.1 ± 1.38. The average Flesch Reading Ease Score was 55 ± 9.1 (range from 32.4 to 72.9).
Conclusion:
We demonstrated that web pages describing heart murmurs were understandable and of high quality. However, the readability level of the websites was above the recommended sixth-grade reading level. The readability of written materials from online sources needs to be improved, while taking care that the information on web pages remains high quality and understandable.
Health translation readability assessment represents an important yet largely underexplored research area in translation studies. This chapter introduces an integrated analytical system developed for the computer-aided assessment of the readability of Chinese health translations. The system comprises two components: a computerised Chinese text lexical profile analyser, and a data-driven statistical instrument that can be used to diagnose and label the readability level of Chinese translations and non-translated health education materials. The online Chinese lexical profile analyser was informed by recent research in corpus linguistics and Chinese educational literacy. It includes thirty-nine individual and compound lexical features to enable in-depth and systematic analyses of the lexical complexity and textual coherence of Chinese health education and promotion materials. The statistical instrument was developed using a large Australian Chinese Health Translation Corpus. It contains two measurement scales, relating to information load and lexical technicality, two important indicators of the readability of Chinese health education resources. The study demonstrated the viability and effectiveness of developing digital analytical tools and instruments for the objective assessment of the readability of health materials, especially health translations, which hold the key to the success and sustainability of health promotion and communication in multicultural societies with diverse population groups.
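The fragment below sketches just two simple lexical-profile features of the kind such an analyser computes, a type-token ratio and mean word length, using the jieba segmenter; both the tool choice and the sample sentence are our assumptions, not details of the system described:

```python
# Illustrative sketch of two simple lexical-profile features for Chinese text,
# using the jieba segmenter (our tool choice, not the system described above).
import jieba

text = "请每天按时服药，并保持均衡饮食。"  # sample health-education sentence
words = [w for w in jieba.lcut(text) if w.strip() and w not in "，。"]

type_token_ratio = len(set(words)) / len(words)
mean_word_length = sum(len(w) for w in words) / len(words)

print(f"tokens: {words}")
print(f"type-token ratio: {type_token_ratio:.2f}")
print(f"mean word length: {mean_word_length:.2f} characters")
```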
Text readability assessment is a challenging interdisciplinary endeavor with rich practical implications. It has long drawn the attention of researchers internationally, and the readability models since developed have been widely applied to various fields. Previous readability models have only made use of linguistic features employed for general text analysis and have not been sufficiently accurate when used to gauge domain-specific texts. In view of this, this study proposes a latent-semantic-analysis (LSA)-constructed hierarchical conceptual space that can be used to train a readability model to accurately assess domain-specific texts. Compared with a baseline reference using a traditional model, the new model improves accuracy by 13.88 percentage points, to 68.98%, when leveling social science texts, and by 24.61 points, to 73.96%, when assessing natural science texts. We then combine the readability features developed for the current study with general linguistic features, and the accuracy of leveling social science texts improves by an even larger margin of 31.58 points, to 86.68%, and that of natural science texts by 26.56 points, to 75.91%. These results indicate that the readability features developed in this study can be used both to train a readability model for leveling domain-specific texts and in combination with more common linguistic features to enhance the efficacy of the model. Future research can expand the generalizability of the model by assessing texts from different fields and grade levels using the proposed method, thus enhancing the practical applications of this new method.
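The paper's model is more elaborate, but the core LSA step, projecting texts into a low-dimensional latent semantic space and training a leveling classifier over it, can be sketched with scikit-learn on toy data (corpus and labels invented):

```python
# Toy sketch of LSA-based features for readability leveling: TF-IDF vectors
# reduced by truncated SVD (LSA), then a classifier over the latent space.
# Corpus and labels are invented; the paper's model is more elaborate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The cat sat on the mat.",                                      # low level
    "Plants need light and water to grow.",                         # low level
    "Photosynthesis converts light energy into chemical energy.",   # high level
    "Cellular respiration oxidises glucose to release energy.",     # high level
]
levels = [1, 1, 2, 2]

model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2, random_state=0),
    LogisticRegression(),
)
model.fit(texts, levels)
print(model.predict(["Mitochondria release energy from glucose."]))
```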
The final rule for the protection of human subjects requires that informed consent be “in language understandable to the subject” and mandates that “the informed consent must be organized in such a way that facilitates comprehension.” This study assessed the readability of Institutional Review Board-approved informed consent forms at our institution, implemented an intervention to improve the readability of consent forms, and measured the first year impact of the intervention.
Methods
Readability assessment was conducted on a sample of 217 Institutional Review Board-approved informed consents from 2013 to 2015. A plain language informed consent template was developed and implemented, and readability was assessed again after 1 year.
Results
The mean readability of the baseline sample was 10th grade. The mean readability of the post-intervention sample (n=82) was 7th grade.
Conclusions
Providing investigators with a plain language informed consent template and training can promote improved readability of informed consents for research.
Information is key to patient informed choice, and the internet is currently a major source of health information for adults in the UK. For users to make use of the information, it must be presented in a way that they can understand. This depends on a number of factors, one being that the document is written at the right level to be understood by the reader: its readability.
Aim
The aim of this study was to assess the readability of radiotherapy-related documents on the internet and compare their levels to published norms.
Method
An internet search was undertaken using Google to identify UK-based literature. Once identified, documents were downloaded into Word and cleaned of punctuation other than sentence-final punctuation; they were then analysed with the software package Readability Studio.
Results and conclusions
Documents tended to be written at too high a reading level, although the reading level had improved since a similar study conducted in 2006. Readability appears to be related to the use of the passive voice, which varied widely in the sample collected; reducing the use of the passive voice could improve the readability of the information.
Kripke recently suggested viewing the intuitionistic continuum as an expansion in time of a definite classical continuum. We prove the classical consistency of a three-sorted intuitionistic formal system IC, simultaneously extending Kleene’s intuitionistic analysis I and a negative copy C° of the classically correct part of I, with an “end of time” axiom ET asserting that no choice sequence can be guaranteed not to be pointwise equal to a definite (classical or lawlike) sequence. “Not every sequence is pointwise equal to a definite sequence” is independent of IC. The proofs are by C-realizability interpretations based on classical ω-models $\mathcal{M} = (\omega, \mathcal{C})$ of C°.