We consider a distributed optimization problem in which two nodes, S1 and S2, wish to jointly minimize a common convex quadratic cost function f(x1, x2), subject to separate local constraints on x1 and x2, respectively. Node S1 controls variable x1 only, and node S2 controls variable x2 only. The two nodes locally update their respective variables and periodically exchange their values over a noisy channel. Previous studies of this problem have mainly focused on convergence and the analysis of the convergence rate. In this work, we focus on the communication energy and study its impact on convergence. In particular, we consider a class of distributed stochastic gradient-type algorithms implemented using certain linear analog messaging schemes. We study the minimum amount of communication energy required for the two nodes to compute an ε-minimizer of f(x1, x2) in the mean-square sense. Our analysis shows that the communication energy must grow at least at the rate of Ω(ε⁻¹). We also derive specific designs that attain this minimum-energy bound, and we provide simulation results that confirm our theoretical analysis. An extension to the multiple-node case is also described.
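The setup described above can be sketched in a few lines. The following is a minimal illustration only, not the paper's exact scheme: the particular quadratic cost, the diminishing step size, the Gaussian channel model, and the amplitude scaling are all illustrative assumptions. Each node runs a gradient step on its own variable, using a noisy analog copy of the other node's variable received over the channel.

```python
import random

def grad_f(x1, x2):
    # Example convex quadratic cost (an assumption for illustration):
    # f(x1, x2) = (x1 - 1)^2 + (x2 + 2)^2 + 0.5 * x1 * x2
    g1 = 2.0 * (x1 - 1.0) + 0.5 * x2
    g2 = 2.0 * (x2 + 2.0) + 0.5 * x1
    return g1, g2

def run(iters=2000, noise_std=0.1, amp=1.0, seed=0):
    """Two-node distributed stochastic gradient with linear analog messaging."""
    rng = random.Random(seed)
    x1, x2 = 0.0, 0.0
    for k in range(1, iters + 1):
        step = 1.0 / k  # diminishing step size (illustrative choice)
        # Linear analog messaging: each node transmits a scaled copy of its
        # variable over an additive-noise channel; the receiver rescales.
        x2_noisy = (amp * x2 + rng.gauss(0.0, noise_std)) / amp  # at node S1
        x1_noisy = (amp * x1 + rng.gauss(0.0, noise_std)) / amp  # at node S2
        g1, _ = grad_f(x1, x2_noisy)   # S1 updates x1 only
        _, g2 = grad_f(x1_noisy, x2)   # S2 updates x2 only
        x1 -= step * g1
        x2 -= step * g2
    return x1, x2
```

For this example cost, the exact minimizer is (x1, x2) = (1.6, -2.4), and the iterates approach it despite the channel noise; raising the transmit amplitude `amp` (i.e., spending more communication energy) reduces the effective noise at the receiver, which is the energy-accuracy trade-off the abstract refers to.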