When an optimization problem is represented by its essential objective function, which incorporates constraints through infinite penalties, first- and second-order conditions for optimality can be stated in terms of the first- and second-order epi-derivatives of that function. Such derivatives are also the key to formulating subproblems that determine the response of a problem's solution when the data values on which the problem depends are perturbed. For these reasons it is vital to have a calculus of epi-derivatives available. This paper builds on a central case already understood, where the essential objective function is the composite of a convex function and a smooth mapping satisfying certain qualifications, in order to develop differentiation rules covering operations such as addition of functions and a more general form of composition. Classes of "amenable" functions are introduced to mark out territory in which this sharper form of nonsmooth analysis can be carried out.
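As a brief sketch of the setting (the notation here is illustrative, not quoted from the paper): for a problem of minimizing an objective $f_0$ over a feasible set $C$, the essential objective adds an infinite penalty outside $C$, and the central composite case concerns functions expressible locally as a convex function composed with a smooth mapping,
\[
f(x) \;=\; f_0(x) + \delta_C(x),
\qquad
\delta_C(x) \;=\;
\begin{cases}
0 & \text{if } x \in C,\\[2pt]
+\infty & \text{if } x \notin C,
\end{cases}
\]
\[
f(x) \;=\; g\bigl(F(x)\bigr)
\quad\text{with } g \text{ lower semicontinuous, proper, convex and } F \text{ smooth,}
\]
subject to a constraint qualification at the point of interest.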