In this paper we present some preliminary ideas for the design of continuous nonlinear neural networks with `learning.' Specifically, we introduce the idea of learning in Hopfield recursive neural networks. The network is trained so that application of a given set of inputs produces the desired set of outputs. A method is developed to determine the interconnection weights of the network so as to achieve the desired stable equilibrium points. The method also illustrates a way to `learn' interconnection weights that are not computed a priori. Conditions are obtained for the asymptotic stability of the equilibrium points, and an illustrative simulation is presented.
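As a point of reference for the weight-determination problem described above, the classical Hebbian outer-product rule gives one well-known way to choose interconnection weights so that prescribed patterns become equilibrium points of a (discrete, sign-activation) Hopfield network. The sketch below is our illustration of that standard rule only; the pattern values and network size are arbitrary, and the paper's continuous-network learning method is not reproduced here.

```python
import numpy as np

# Bipolar patterns to be stored as equilibrium points (arbitrary example data)
patterns = np.array([[ 1, -1,  1, -1],
                     [ 1,  1, -1, -1]])
n = patterns.shape[1]

# Hebbian outer-product rule: symmetric weights, zero self-connections
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def step(s):
    """One synchronous update of the network state."""
    return np.sign(W @ s)

# Each stored pattern is a fixed (equilibrium) point of the update map
for p in patterns:
    assert np.array_equal(step(p), p)
```

Because the two example patterns are orthogonal, the cross-terms in the outer-product sum vanish and each pattern maps back onto itself under the update, i.e. it is an equilibrium of the network.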