The 1940s saw the reconciliation of mathematical wartime techniques with social scientific theorizing. This chapter examines how the economy was depicted as a huge optimization problem that would soon be solvable by electronic computers. Investigating input–output analysis as practiced at the Harvard Economic Research Project (HERP) under the directorship of Wassily Leontief illustrates the difficulties of making an economic abstraction work in measurement practice. Chapter 3 draws a trajectory to the Conference on Activity Analysis of 1949, where mathematical economists combined techniques of linear programming with what they saw as conventional economics. The move from planning tools to devices for theoretical speculation came along with a shift in modeling philosophies and notions of realism. Focusing entirely on mathematical formalisms and abandoning the concern with measurement brought about the main research object of the economics profession in the subsequent years: the economy as a flexible and efficient system of production in the form of a system of simultaneous equations. This was the economy that provided the primary point of reference for Solow’s model.
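For readers unfamiliar with input–output analysis, a standard textbook formulation (our illustration, not the chapter's) shows what "the economy as a system of simultaneous equations" amounts to:

\[ x = Ax + d \quad\Longrightarrow\quad x = (I - A)^{-1} d \]

where x is the vector of gross sectoral outputs, A the matrix of technical coefficients (units of input i required per unit of output j), and d the vector of final demands; the system has an economically meaningful solution whenever I − A is invertible with a nonnegative inverse.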
OpenAI is a research organization founded by, among others, Elon Musk, and supported by Microsoft. In November 2022, it released ChatGPT, an incredibly sophisticated chatbot, that is, a computer system with which humans can converse. The capability of this chatbot is astonishing: as well as conversing with human interlocutors, it can answer questions about history, explain almost anything you might think to ask it, and write poetry. This level of achievement has provoked interest in questions about whether a chatbot might have something similar to human intelligence and even whether one could be conscious. Given that the function of a chatbot is to process linguistic input and produce linguistic output, we consider that the most interesting question in this direction is whether a sophisticated chatbot might have inner speech. That is: might it talk to itself, internally? We explored this via a conversation with ‘Playground’, a chatbot which is very similar to ChatGPT but more flexible in certain respects. We put to it questions which, plausibly, can only be answered if one first produces some inner speech. Here, we present our findings and discuss their philosophical significance.
Computer modeling of specific psychological processes began over fifty years ago. Cognitive scientists do not use computers merely as tools, but also as a source of inspiration about the nature of mental processes. Computational cognitive science has a long way to go, and there are many unanswered questions. However, cognitive scientists believe that the mind/brain is in principle intelligible in terms of whatever turns out to be the best theory of what computers can do. The overview of cognitive science given in this chapter should suffice to show that significant progress has been made.
After a brief orientation to logic-based (computational) cognitive modeling, the necessary preliminaries are discussed in this chapter (e.g., what a logic is, and what it means for a logic to “capture” human cognition). Three “microworlds” or domains with which all readers should be comfortably familiar (natural numbers and arithmetic; everyday vehicles; and residential schools, e.g., colleges and universities) are introduced, in order to facilitate exposition in the chapter. Then the ever-expanding universe U of formal logics is given, with an emphasis on three categories therein: deductive logics having no provision for directly modeling cognitive states; nondeductive logics suitable for modeling rational belief through time but without machinery to directly model cognitive states such as believes and knows; and finally, nondeductive logics that enable the kind of direct modeling of cognitive states absent from the first two types of logic. The chapter then focuses specifically on two important aspects of human-level cognition to be modeled in logic-based fashion: the processing of quantification, and defeasible (or nonmonotonic) reasoning. Finally, a brief evaluation of logic-based cognitive modeling is provided, together with some comparison to other approaches to cognitive modeling, and the future of the discipline is considered. The chapter presupposes nothing more than high-school mathematics of the standard sort on the part of the reader.
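To illustrate defeasible reasoning using the chapter's "everyday vehicles" microworld (the example and notation here are ours, not the chapter's): from the default rule Vehicle(x) ⇒ FourWheels(x) ("vehicles normally have four wheels") and the premise Vehicle(m), one defeasibly infers FourWheels(m); but adding the premises Motorcycle(m) and Motorcycle(x) → ¬FourWheels(x) retracts that conclusion without contradicting the original premises. This is precisely the nonmonotonicity that a purely deductive logic cannot express, since in deductive logic adding premises never removes conclusions.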
The burst of information generated by new statistical projects made the question of calculation paramount. The masses of data that the new National Sample Surveys yielded, and the increasing complexity of planning models, had made the state’s data processing needs evident. Chapter 3 reveals the campaign led by Mahalanobis and the Indian Statistical Institute to bring India its first computers. Unlike in other parts of the world, computers were not sought for military purposes in India. Instead, India pursued them because they were seen as a solution to central planning’s knottiest puzzle, that of big data. The chapter follows the decade-long quest to import computers from the United States, Europe, and the U.S.S.R., unearthing the Cold War politics in which it inevitably became embroiled. Overall, Part I of this book demonstrates the building of a technocratic, data-hungry, high-modernist state and its attempts to make the economic realm more legible.
The Indian planning project was one of the postcolonial world's most ambitious experiments. Planning Democracy explores how India fused Soviet-inspired economic management and Western-style liberal democracy at a time when they were widely considered fundamentally contradictory. After nearly two centuries of colonial rule, planning was meant to be independent India's route to prosperity. In this engaging and innovative account, Nikhil Menon traces how planning built India's knowledge infrastructure and data capacities, while also shaping the nature of its democracy. He analyses the challenges inherent in harmonizing technocratic methods with democratic mandates and shows how planning was the language through which the government's aspirations for democratic state-building were expressed. Situating India within international debates about economic policy and Cold War ideology, Menon reveals how India walked a tightrope between capitalism and communism, which heightened the drama of its development on the global stage.
In an epilogue that examines the current shift from human to animatronic performers in film and video, this book concludes with a meditation on the contemporary ambition to “go viral.” The history of computer-generated imaging (CGI) technology suggests that, in its pixelated form, a new - if also contested - concept of self has emerged for the twenty-first century. While fantasies of disembodiment and photo-realistic avatars of domination and destruction persist in contemporary media, pixelation offers us an alternative model of self that presages a post-Anthropogenic era. By fracturing the contours of a reified identity, the pixelated image invites us to re-envision ourselves as multi-celled organisms capable of co-existing with other such life forms within a shared biosphere. Inscribed in this new performance form, in other words, is yet more evidence for the ways we seek to comprehend and adapt to changes in our ever-modernizing world.
Determining whom to trust and whom not to trust has been critical since the early days of ancient civilizations. However, with the increasing use of digital technologies, trust situations have changed. We communicate less face-to-face. Rather, we communicate with other people over the Internet (e.g., Facebook) or we interact with technological artifacts (e.g., chatbots on the Internet or autonomous vehicles). This trend towards digitalization has major implications. It affects both the role of trust and how we should conceptualize trust and trustworthiness. In this chapter, insights on phenomena related to trust in a digital world are reviewed. This review integrates findings from various levels of analysis, including behavioral and neurophysiological. The structure of this chapter is based on four different scenarios of trust in a digital world that were developed by the author. Scenario A describes a technology-free situation of human-human interaction. Scenario B outlines a situation of computer-mediated human-human interaction. Scenario C denotes a situation of direct human-technology interaction. Scenario D refers to a situation of computer-mediated human-technology interaction. The common denominator of all situations is that a human acts in the role of trustor, while the role of trustee can be either another human or a technological artifact.
Computerised neuropsychological assessments (CNAs) have been proposed as an alternative to traditional pencil-and-paper assessments (PnPAs), which are considered the “gold standard” for diagnosing dementia. However, limited research has been conducted with culturally and linguistically diverse (CALD) individuals. This study investigated the suitability of PnPAs and CNAs for measuring cognitive performance in a heterogeneous sample of older Australian CALD English speakers compared with a native English-speaking background (ESB) sample.
Methods:
Participants were 1037 community-dwelling individuals aged 70–90 years without a dementia diagnosis from the Sydney Memory and Ageing Study (873 ESB, 164 CALD). The level and pattern of cognitive performance in the CALD group were compared with those of the ESB group on a newly developed CNA and a comprehensive PnPA in English, controlling for covariates. Multiple hierarchical regression was used to identify the extent to which linguistic and acculturation variables explained performance variance (see the sketch following this abstract).
Results:
CALD participants’ performance was consistently poorer than that of ESB participants on both the PnPA and the CNA, and more so on the PnPA than the CNA, after controlling for socio-demographic and health factors. Together, linguistic and acculturation variables explained approximately 20% and 25% of CALD performance variance on the PnPA and CNA respectively, over and above demographics and self-reported computer use.
Conclusions:
Performances of the CALD and ESB groups differed more on PnPAs than on CNAs, but caution is needed before concluding that CNAs are more culturally appropriate for assessing cognitive decline in older CALD individuals. Our findings extend the current literature by confirming the influence of linguistic and acculturation variables on cognitive assessment outcomes for older CALD Australians.
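As an illustration of the hierarchical-regression step described in the Methods, here is a minimal sketch of how incremental variance explained (delta R^2) by a second block of predictors is typically computed; variable names and data are hypothetical, not the study's.

# Hierarchical multiple regression: how much extra variance a block of
# linguistic/acculturation predictors explains beyond demographics.
# All names and data below are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
demographics = rng.normal(size=(n, 2))    # step-1 block (e.g., age, education)
acculturation = rng.normal(size=(n, 2))   # step-2 block (linguistic/acculturation)
score = demographics @ [0.3, 0.2] + acculturation @ [0.5, 0.4] + rng.normal(size=n)

step1 = sm.OLS(score, sm.add_constant(demographics)).fit()
step2 = sm.OLS(score, sm.add_constant(np.hstack([demographics, acculturation]))).fit()

# Incremental R^2 attributable to the linguistic/acculturation block.
print(f"delta R^2 = {step2.rsquared - step1.rsquared:.3f}")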
The present paper deals with computer-based training approaches in psychiatric rehabilitation. First, an overview is given of programs currently available on the German software market that appear suitable for training patients with information-processing (including sensorimotor) deficits. A clinical setting is then described that illustrates how computer-based cognitive training can be organized in a group format. Finally, several clinical studies that have aimed to determine the efficacy of PC-based training programs in schizophrenic patients are briefly reviewed.
The concordance and degree of overlap between 13 diagnostic systems for schizophrenia, including the five European systems of Berner, Bleuler, Langfeldt, Pull and Schneider, were evaluated in a cross-sectional study (N = 51) taking the phase of illness (acute or residual) into account. The diagnostic assessments were processed by computer using a 183-item standardised checklist and a data-processing program written in GW-BASIC. The inter-rater reliability, as assessed by the kappa coefficient, was good to excellent for each diagnostic system established by this method (κ from 0.5 to 1). When comparing the concordance between pairs of the 13 diagnostic systems in the acute and residual phase groups, only two significant relationships were not influenced by the phase of illness (Carpenter × RDC; Catego × Schneider), while 24 were. These included only two relationships in the acute group (Carpenter × Catego; Carpenter × Schneider) and 22 links between pairs of systems in the residual group. In the acute group, no diagnostic system for schizophrenia that includes duration criteria, such as those of DSM-III-R, Feighner, Langfeldt, Pull and RDC, was linked to other systems. In the residual group, the operational systems such as Catego, DSM-III-R, Feighner, Newhaven, Pull and RDC had more than five relationships with the other systems, whereas the non-operational systems of Bleuler, ICD-9, Langfeldt and Schneider had fewer than four relationships with the others. Except for Pull's criteria, the European diagnostic systems, in particular Berner's and Bleuler's, seemed to differ from the others because of the few relationships they displayed. The results underline the importance of taking the phase of illness into account when comparing studies that use different diagnostic systems for schizophrenia. They also clarify the relationships between European and international diagnostic systems, which have been insufficiently established so far.
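For reference, the kappa statistic reported above is the standard chance-corrected agreement measure (our gloss, not part of the original abstract):

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where p_o is the observed proportion of agreement between raters and p_e the agreement expected by chance; κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance, so the reported range of 0.5 to 1 corresponds to moderate-to-perfect reliability.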
From December 1–31, 1997, the Department of Psychiatry and Psychotherapy, in co-operation with the Department of Information Technology, University of Tübingen, Germany, organised the first virtual congress on psychiatry on the Internet. The congress was aimed at facilitating the exchange of results of psychiatric studies and ideas and at stimulating discussion among interested colleagues. Almost 100 participants from 17 countries on four continents took part in this event. Sixteen contributions were presented and discussed. The problems and opportunities of this medium in the organisation and running of congresses are presented and discussed. The experience gained in this congress suggests that the Internet will find increasing use as a medium for medical congresses within the next few years.
The unique feature of this compact student's introduction to Mathematica® and the Wolfram Language™ is that the order of the material closely follows a standard mathematics curriculum. As a result, it provides a brief introduction to those aspects of the Mathematica® software program most useful to students. Used as a supplementary text, it will help bridge the gap between Mathematica® and the mathematics in the course, and it will serve as an excellent tutorial for former students. There have been significant changes to Mathematica® since the second edition, and all chapters have been updated to account for new features in the software, including natural language queries and the vast stores of real-world data that are now integrated through the cloud. This third edition also includes many new exercises and a chapter on 3D printing that showcases the software's new computational geometry capabilities and will equip readers to print in 3D.
Decision making in weed control is complex and time-consuming. Moreover, the structure of the available information does not facilitate the comparison of different herbicides. Indeed, information format can be the limiting factor in the performance of sophisticated computer programs intended to supply appropriate advice on weed control treatments. A relational database for decision support on chemical weed control has been developed. It uses a detailed structure, subdividing the information where possible. The database includes programs for entering, updating, and printing data, as well as programs for retrieving information and giving treatment advice. Access to herbicide information is organized around searches based on a specific crop and multiple weed species at their respective growth stages. An optimization step then selects the smallest number of herbicides that together control all the chosen weeds. Information on critical parameters for herbicide application, such as varietal restrictions, rotational crops, and compatibility with other products, is also interactively available.
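Choosing the fewest herbicides that together control all target weeds is an instance of minimum set cover. The following is a minimal greedy sketch of that selection step; the herbicide names and control spectra are hypothetical, and the database's actual optimization routine may differ (greedy selection is a heuristic and is not guaranteed to find the true minimum).

# Greedy set-cover sketch: choose few herbicides that jointly control
# all target weeds. All names and data below are hypothetical.
def select_herbicides(controls: dict[str, set[str]], weeds: set[str]) -> list[str]:
    chosen, uncovered = [], set(weeds)
    while uncovered:
        # Pick the herbicide that controls the most still-uncovered weeds.
        best = max(controls, key=lambda h: len(controls[h] & uncovered))
        if not controls[best] & uncovered:
            raise ValueError(f"no herbicide controls: {uncovered}")
        chosen.append(best)
        uncovered -= controls[best]
    return chosen

controls = {
    "herbicide_A": {"wild oat", "chickweed"},
    "herbicide_B": {"chickweed", "cleavers"},
    "herbicide_C": {"wild oat", "cleavers", "knotgrass"},
}
print(select_herbicides(controls, {"wild oat", "chickweed", "cleavers"}))
# -> ['herbicide_A', 'herbicide_B']: two products cover all three weeds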
With recent developments in technology, online tests and digital tools offer school psychologists and school counsellors alternate modes of assessment. These new technologies have the potential to increase accessibility to tests (through greater portability), allow school psychologists and school counsellors to service more students (through greater efficiency), enable practitioners to provide more comprehensive assessments, and build professional capacity. This article will outline some examples of online tools and their benefits for time-poor school psychologists and school counsellors as well as identify ethical implications to be considered when adopting new technologies.
This note describes an approach for teaching undergraduate students basic properties of cost and production functions with the aid of computer-generated illustrations. Parameters of a third-degree polynomial are derived by making use of both the theoretical restrictions and the specific agricultural production process. The function is then used as the basis for a simple computer graphics program which generates illustrations of the corresponding marginal physical product (MPP), average cost (AC), and marginal cost (MC) functions. Such illustrations can be used to make technically accurate visual aids.
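A present-day sketch of the same idea follows (our illustration: the note predates such tools, and the coefficients below are arbitrary rather than the ones derived in the note). With a single variable input x priced at w, MC = w / MPP and AC = (fixed cost + w·x) / output.

# Derive and plot MPP, AC, and MC curves from a cubic production function
# y = a*x + b*x**2 + c*x**3. Coefficients are arbitrary illustrative values.
import numpy as np
import matplotlib.pyplot as plt

a, b, c = 2.0, 0.4, -0.01            # cubic chosen so MPP first rises, then falls
w, fixed_cost = 1.0, 10.0            # input price and fixed cost

x = np.linspace(0.1, 28, 300)        # input levels (MPP stays positive on this range)
y = a * x + b * x**2 + c * x**3      # total physical product
mpp = a + 2 * b * x + 3 * c * x**2   # marginal physical product, dy/dx
ac = (fixed_cost + w * x) / y        # average cost per unit of output
mc = w / mpp                         # marginal cost at each output level

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.plot(x, y, label="TPP"); ax1.plot(x, mpp, label="MPP")
ax1.set_xlabel("input x"); ax1.legend()
ax2.plot(y, ac, label="AC"); ax2.plot(y, mc, label="MC")
ax2.set_xlabel("output y"); ax2.legend()
plt.tight_layout(); plt.show()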
Cognitive behavioural therapy (CBT) has been used in the treatment of alcohol use disorder (AUD), generally delivered in individual or group therapy rather than by computer.
Aim
This study examined the effectiveness of an interactive, personalised, computer-based CBT intervention in a randomised controlled trial.
Methods
We studied a group of 55 patients with AUD, randomised to either five hour-long computerised CBT sessions or placebo cognitive-stimulation sessions, alongside a 4-week inpatient rehabilitation treatment, and followed them for 3 months.
Results
There was a high degree of patient adherence to the protocol. Both groups did well, with a significant fall in alcohol outcome measures, including number of drinks per drinking day and number of drinking days, and an increase in abstinence rates in both groups to an equivalent level. The CBT group attended Alcoholics Anonymous groups more frequently and showed significant changes in alcohol self-efficacy outcomes, which correlated with their drinking outcomes.
Conclusions
We concluded that computerised CBT is a potentially useful clinical tool that warrants further investigation in different treatment settings for AUD.