Published online by Cambridge University Press: 24 May 2022
One challenge in explaining neural evolution is the formal equivalence of different computational architectures. If a simple architecture suffices, why should more complex neural architectures evolve? The answer must involve the intense competition for resources under which brains operate. I show how recurrent neural networks can be favored when increased complexity allows for more efficient use of existing resources. Although resource constraints alone can drive such a transition, recurrence also shifts the landscape of what is subsequently evolvable. Hence organisms on either side of a transition boundary may have similar cognitive capacities but very different potential for evolving new capacities.
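One way resource efficiency can favor recurrence, despite formal equivalence, is weight reuse: a recurrent network applies one set of connection weights at every time step, whereas a feedforward network computing the same unrolled function needs separate weights per step. The following sketch is illustrative only and not from the target article; the function names and the specific parameter-count formulas are my assumptions.

```python
# Illustrative sketch (assumptions, not the article's model): compare the
# number of connection weights needed to process a length-T input sequence
# with hidden width n_h under two formally equivalent architectures.

def recurrent_param_count(n_in: int, n_h: int, T: int) -> int:
    """A recurrent network reuses one weight set at every time step:
    input weights (n_in*n_h) + recurrent weights (n_h*n_h) + biases (n_h)."""
    return n_in * n_h + n_h * n_h + n_h  # independent of T

def unrolled_param_count(n_in: int, n_h: int, T: int) -> int:
    """A feedforward network computing the same unrolled function needs a
    fresh copy of those weights for each of the T steps."""
    return T * (n_in * n_h + n_h * n_h + n_h)  # grows linearly with T

for T in (1, 10, 100):
    r = recurrent_param_count(8, 32, T)
    f = unrolled_param_count(8, 32, T)
    print(f"T={T:3d}  recurrent={r:6d}  unrolled feedforward={f:7d}")
```

Under this toy accounting, the recurrent cost is flat in sequence length while the feedforward cost scales with it, which is one concrete sense in which added architectural complexity buys more efficient use of a fixed resource budget.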