
30 - Prospects for Transforming Schools with Technology-Supported Assessment

Published online by Cambridge University Press: 05 June 2012

Barbara Means (SRI International)
R. Keith Sawyer (Washington University, St Louis)

Summary

The last two decades have been marked by great expectations – and some important progress – in bringing technology to bear on the process of schooling. True, the more dramatic predictions of two decades ago concerning the impact of Information Age technology on schooling (see, for example, Milken Family Foundation, 1999; Office of Technology Assessment, 1995) have not come to pass. But even so, the cup is half full. Today the average student:computer ratio in American schools is 5:1 (National Center for Education Statistics, 2003), and the use of technology for student research and report preparation has become commonplace. But arguably the biggest “buzz” in the educational technology community today is around the use of technology to improve assessment. School reformers, technology enthusiasts, and business interests have all identified assessment as an area with great potential for increased classroom use of technology (Bennett, 2002; CEO Forum on Education and Technology, 2001; CoSN, 2005; Education Week, 2003).

Although enthusiasm for a marriage of classroom assessment and technology is widespread, there are two competing visions of the purpose and nature of effective classroom assessments, each with different implications for the role of technology. One vision calls for connecting classroom assessment practices more closely to state-mandated content standards and accountability systems. The other vision, which draws heavily on recent advances in the learning sciences, calls for using technology to develop and deliver assessments that are integrated into day-to-day instruction and that enable teachers to gain deeper insights into their students' thinking and to adapt their instruction accordingly.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2005


References

Bass, K. M., & Glaser, R. (2004). Developing assessments to inform teaching and learning. CSE Report 628. Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing, University of California, Los Angeles.
Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of technology and assessment. Journal of Technology, Learning, and Assessment, 1(1). Available at http://www.jtla.org, accessed November 22, 2005.
Black, P., & Harrison, C. (2001). Feedback in questioning and marking: The science teacher's role in formative assessment. School Science Review, 82(301), 55–61.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Bloom, B. S. (1976). Human characteristics and school learning. New York: McGraw-Hill.
Borja, R. R. (2003). Prepping for the big test. Technology Counts 2003, 22(35), 23–24, 26.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. In Iran-Nejad, A., & Pearson, P. D. (Eds.), Review of research in education (Vol. 24, pp. 61–100). Washington, DC: American Educational Research Association.
Caramazza, A., McCloskey, M., & Green, B. (1981). Naïve beliefs in “sophisticated” subjects: Misconceptions about trajectories of objects. Cognition, 9, 117–123.
CEO Forum on Education and Technology. (2001). School technology and readiness – Key building blocks for achievement in the 21st century: Assessment, alignment, access, analysis. Available at www.ceoforum.org/downloads/report4.pdf, accessed November 22, 2005.
Champagne, A. B., Klopfer, L. E., & Anderson, J. H. (1980). Factors influencing the learning of classical mechanics. American Journal of Physics, 48, 1074–1079.
Chi, M. T. H., & Slotta, J. D. (1993). Ontological coherence of intuitive physics. Cognition and Instruction, 10(2&3), 249–260.
CoSN (Consortium for School Networking). (2005). From vision to action: How school districts use data to improve performance. Washington, DC: Author.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69, 970–977.
Donovan, M. S., & Bransford, J. D. (2005). How students learn: History, mathematics, and science in the classroom. Washington, DC: National Academy Press.
Donovan, M. S., Bransford, J. D., & Pellegrino, J. W. (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.
Dyson, E. (2004, October). Held back: The market for software in our schools. Release 1.0. Esther Dyson's Monthly Report. Available at http://www.release1-0.com, accessed November 22, 2005.
Education Week. (2003). Pencils down: Technology's answer to testing. Technology Counts 2003, 22(35), 8, 10.
Excelsior Software. (2004, September). Advertisement for Pinnacle Plus Assessment Management System. eSchool News, p. 31.
Fagen, A. P., Crouch, C. H., & Mazur, E. (2002). Peer instruction: Results from a range of classrooms. The Physics Teacher, 40, 206–207.
Frederiksen, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18, 27–32.
Griffin, S., & Case, R. (1997). Re-thinking the primary school math curriculum: An approach based on cognitive science. Issues in Education, 3(1), 1–49.
Hartline, F. (1997). Analysis of 1st semester of Classtalk use at McIntosh Elementary School. Yorktown, VA: Better Education.
Keller, J. M. (1983). Motivational design of instruction. In Reigeluth, C. (Ed.), Instructional-design theories and models: An overview of their current status. Hillsdale, NJ: Lawrence Erlbaum Associates.
Koch, M., & Sackman, M. (2004). Assessment in the palm of your hand. Science and Children, 33(9), 33–37.
Mazur, E. (1997). Peer instruction: A user's manual. Upper Saddle River, NJ: Prentice Hall.
McCloskey, M. (1983). Naïve theories of motion. In Gentner, D., & Stevens, A. L. (Eds.), Mental models (pp. 299–324). Hillsdale, NJ: Lawrence Erlbaum Associates.
McCloskey, M., & Kohl, D. (1983). Naïve physics: The curvilinear impetus principle and its role in interactions with moving objects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 146–156.
McTighe, J., & Seif, E. (2003). A summary of underlying theory and research base for Understanding by Design. Unpublished manuscript.
Means, B., Roschelle, J., Penuel, W., Sabelli, N., & Haertel, G. (2004). Technology's contribution to teaching and policy: Efficiency, standardization, or transformation? In Floden, R. E. (Ed.), Review of research in education (Vol. 27, pp. 159–181). Washington, DC: American Educational Research Association.
Milken Family Foundation. (1999). Transforming learning through technology: Policy roadmaps for the nation's governors. Santa Monica, CA: Author.
Minstrell, J. (1999). Facets of student understanding and assessment development. In Pellegrino, J. W., Jones, L. R., & Mitchell, K. (Eds.), Grading the nation's report card: Research from the evaluation of NAEP. Washington, DC: National Academy Press.
Mislevy, R. J., Hamel, L., Fried, R., Gaffney, T., Haertel, G., Hafter, A., et al. (2003). Design patterns for assessing science inquiry. PADI Technical Report 1. Menlo Park, CA: SRI International.
Mislevy, R. J., Steinberg, L. S., Almond, R. G., Haertel, G. D., & Penuel, W. (2003). Improving educational assessment. In Haertel, G. D., & Means, B. (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 149–180). New York: Teachers College Press.
National Center for Education Statistics (NCES), U.S. Department of Education. (2003). Internet access in U.S. public schools and classrooms, 1994–2002. Washington, DC: Author.
National Research Council. (2001). Classroom assessment and the National Science Education Standards. Washington, DC: National Academy Press.
Office of Technology Assessment, U.S. Congress. (1995). Education and technology: Future visions. OTA-BP-HER-169. Washington, DC: U.S. Government Printing Office.
Olson, L. (2003). Legal twists, digital turns. Technology Counts 2003, 22(35), 11–14, 16.
Pearson Education. (2005, February). Advertisement for Progress Assessment Series. eSchool News, p. 9.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Sadler, P. M. (1987). Alternative conceptions in astronomy. In Novak, J. D. (Ed.), Second international seminar on misconceptions and educational strategies in science and mathematics (Vol. 3, pp. 422–425). Ithaca, NY: Cornell University Press.
Sadler, P. M. (1998). Psychometric models of student conceptions in science: Reconciling qualitative studies and distractor-driven assessment instruments. Journal of Research in Science Teaching, 35(3), 265–296.
Schmidt, W. H., Raizen, S., Britton, E. D., Bianchi, L. J., & Wolfe, R. G. (1997). Many visions, many aims: Volume II: A cross-national investigation of curricular intentions in school science. London: Kluwer.
Schofield, J. W., Eurich-Fulcer, R., & Britt, C. L. (1994). Teachers, computer tutors, and teaching: The artificially intelligent tutor as an agent of classroom change. American Educational Research Journal, 31(3), 579–607.
Shepard, L. (1997). Insights gained from a classroom-based assessment project. CSE Technical Report 451. Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.
Shepard, L. A. (2000). The role of assessment in a learning culture. Presidential address at the annual meeting of the American Educational Research Association, New Orleans, April 26. Available at aera.net/pubs/er/arts/29-07/shep02.htm.
Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57, 1–22.
Stevens, R., Soller, A., Cooper, M., & Sprang, M. (2004). Modeling the development of problem solving skills in chemistry with a web-based tutor. In Lester, J. C., Vicari, R. M., & Paraguaçu, F. (Eds.), Intelligent tutoring systems (pp. 580–591). Heidelberg, Germany: Springer-Verlag.
Underdahl, J., Palacio-Cayetano, J., & Stevens, R. (2001). Practice makes perfect: Assessing and enhancing knowledge and problem solving skills with IMMEX software. Learning and Leading with Technology, 28, 26–31.
U.S. Department of Education, Office of Educational Technology. (2004). Toward a new golden age in American education: How the Internet, the law and today's students are revolutionizing expectations. Washington, DC: U.S. Department of Education.
Vendlinski, T., & Stevens, R. (2000). The use of artificial neural nets (ANN) to help evaluate student problem-solving strategies. In Fishman, B., & O'Connor-Divelbiss, S. (Eds.), Proceedings of the fourth international conference of the learning sciences (pp. 108–114). Mahwah, NJ: Lawrence Erlbaum Associates.
White, B., & Frederiksen, J. (2000). Metacognitive facilitation: An approach to making scientific inquiry accessible to all. In Minstrell, J., & van Zee, E. (Eds.), Inquiring into inquiry learning and teaching in science (pp. 331–370). Washington, DC: American Association for the Advancement of Science.
