In this paper, we consider the problem of distributed sequential estimation of time-invariant parameters in a network of cooperating agents. We study a system in which each agent quantifies its belief about the unknown parameters by approximating their posterior with a multivariate Gaussian. At every time instant each agent carries out three operations: (a) it receives private measurements distorted by additive noise, (b) it exchanges information about its belief in the estimated parameters with its neighbors, and (c) it updates its belief with the new information. Since the processing in the network is distributed, it is challenging to devise an optimal strategy in which the agents update their beliefs by Bayes' rule at every iteration. In this work, we instead propose a method that does not process the data in a Bayesian manner and yet allows the agents to asymptotically reach the optimal Bayesian belief held by a fictitious fusion center. We provide a convergence analysis of the method and demonstrate its performance by simulations.
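The three operations above can be illustrated with a minimal simulation sketch. This is not the paper's algorithm: the scalar parameter, the three-agent complete graph, the uniform averaging of natural parameters, and all variable names are assumptions made purely for illustration. Each agent keeps a Gaussian belief in information form, averages it with its neighbors' beliefs (step b), and then folds in the likelihood of its new private measurement (steps a and c).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): scalar time-invariant
# parameter, three agents that are all neighbors of one another.
theta_true = 2.0
n_agents = 3
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
noise_std = 0.5

# Gaussian belief of agent i in information (natural-parameter) form:
# precision lam[i] = 1/var_i, information xi[i] = lam[i] * mean_i.
lam = np.full(n_agents, 1e-3)   # vague priors
xi = np.zeros(n_agents)

for t in range(200):
    # (a) private measurements distorted by additive noise
    y = theta_true + noise_std * rng.standard_normal(n_agents)

    # (b) exchange beliefs with neighbors: here, a uniform average of
    # the natural parameters over each agent's closed neighborhood
    lam_new = np.empty(n_agents)
    xi_new = np.empty(n_agents)
    for i in range(n_agents):
        group = [i] + neighbors[i]
        lam_new[i] = np.mean(lam[group])
        xi_new[i] = np.mean(xi[group])

    # (c) update the belief with the new measurement's likelihood
    lam = lam_new + 1.0 / noise_std**2
    xi = xi_new + y / noise_std**2

# Posterior means recovered from the information form
means = xi / lam
```

Under this averaging scheme every agent's mean estimate concentrates near the true parameter; the fusion-center comparison in the paper asks whether such non-Bayesian local processing can match the belief of a centralized Bayesian estimator that sees all measurements.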