
10 - Annealed On-line Learning in Multilayer Neural Networks

Published online by Cambridge University Press:  28 January 2010

Siegfried Bös
Affiliation:
Brain Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan
Shun-Ichi Amari
Affiliation:
Brain Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan
David Saad
Affiliation:
Aston University

Abstract

In this article we examine on-line learning with an annealed learning rate. Annealing the learning rate is necessary if on-line learning is to reach the optimal solution: with a fixed learning rate, the system approximates the best solution only up to fluctuations whose size is proportional to the learning rate. It has been shown that optimal annealing can make on-line learning asymptotically efficient, meaning that asymptotically it learns as fast as possible. So far, these results have been realized only in very simple networks, such as single-layer perceptrons (section 3). Even the simplest multilayer network, the soft committee machine, exhibits an additional difficulty that makes straightforward annealing ineffective: at the beginning of learning, the committee machine is attracted by a metastable, suboptimal solution (section 4). The system stays in this metastable state for a long time and can only leave it if the learning rate is not too small, which delays the start of annealing considerably. Here we show that a non-local or matrix update can prevent the system from becoming trapped in the metastable phase, allowing annealing to start much earlier (section 5). Some remarks on the influence of the initial conditions and a possible candidate for theoretical support are discussed in section 6. The paper ends with a summary of future tasks and a conclusion.
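The contrast the abstract draws between a fixed and an annealed learning rate can be sketched in a toy setting. The snippet below is an illustrative example, not the authors' soft-committee-machine setup: a single linear unit learns a noisy teacher rule on-line, once with a fixed rate and once with a 1/t schedule that, in the spirit of section 5, is held constant for an initial phase (parameter `t0`, a hypothetical choice) before annealing begins. All names and parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                        # input dimension
teacher = rng.standard_normal(N) / np.sqrt(N)  # target weight vector

def train(schedule, steps=20000, eta0=0.5, t0=100, noise=0.1):
    """On-line (one example per step) learning of a noisy linear teacher."""
    w = np.zeros(N)
    for t in range(1, steps + 1):
        x = rng.standard_normal(N)
        y = teacher @ x + noise * rng.standard_normal()  # noisy teacher output
        if schedule == "fixed":
            eta = eta0
        else:  # hold eta0 for the first t0 steps, then anneal as eta0 * t0 / t
            eta = eta0 * t0 / max(t, t0)
        w += eta * (y - w @ x) * x / N        # on-line gradient step
    return float(np.linalg.norm(w - teacher))  # distance to the teacher

fixed_err = train("fixed")
annealed_err = train("annealed")
# With the fixed rate, the weight error settles at a noise floor proportional
# to eta0; with the 1/t schedule, the residual error keeps shrinking.
```

The `max(t, t0)` guard is one simple way to encode the abstract's point that annealing must not start too early: during the initial constant-rate phase the system can move quickly, and only afterwards does the 1/t decay take over.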

Introduction

One of the most attractive properties of artificial neural networks is their ability to learn from examples and to generalize the acquired knowledge to unknown data.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 1999
