The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference, one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule that equates the expectation values of certain functions with their empirical averages. There are, however, various other ways in which one can construct constraints from empirical data, and these can make the maximum entropy principle lead to very different probability assignments. This paper shows that an argument by Jaynes to justify the usual constraint rule is unsatisfactory, and it investigates several alternative choices. The choice of a constraint rule is also shown to be of crucial importance to the debate about whether there is a conflict between the methods of inference based on maximum entropy and Bayesian conditionalization.
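The "usual constraint rule" the abstract describes, equating the expectation value of a function with its empirical average, can be illustrated with Jaynes's well-known dice example: find the maximum-entropy distribution over the faces 1–6 of a die whose empirical average roll is given. A minimal sketch follows; the function name and the bisection solver for the Lagrange multiplier are illustrative choices of this note, not anything taken from the paper itself.

```python
import math

def maxent_die(mu, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum-entropy distribution on die faces 1..6 subject to the
    constraint that the expected face value equals the empirical mean mu.
    The maximizer has the exponential (Gibbs) form p_i proportional to
    exp(lam * i); we find lam by bisection on the resulting mean."""
    faces = range(1, 7)

    def mean(lam):
        weights = [math.exp(lam * i) for i in faces]
        z = sum(weights)  # normalization (partition function)
        return sum(i * w for i, w in zip(faces, weights)) / z

    # mean(lam) is strictly increasing in lam, so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

# An empirical average of 4.5 (above the fair-die mean of 3.5) yields a
# distribution tilted toward the high faces; an average of exactly 3.5
# recovers the uniform distribution.
p = maxent_die(4.5)
```

With the mean constrained to 3.5 the multiplier is zero and the uniform distribution is returned, which is the sense in which the constraint rule turns an empirical average into a definite probability assignment.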
|Original language||English (US)|
|Number of pages||33|
|Journal||Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics|
|State||Published - Mar 1996|
Bibliographical note: Funding Information:
Dennis Dieks, Fred Muller and Pieter Vermaas. I thank Henk Bos for help in the proof of Theorem 1. It is a pleasure to thank Jeremy Butterfield and all other members of the Department of History and Philosophy of Science in Cambridge, and Harvey Brown at the Department for Philosophy at the University of Oxford for hospitality and encouragement. This work was supported by a grant from the British Council and the Netherlands Organization for Scientific Research.