TY - JOUR

T1 - A circuit complexity formulation of algorithmic information theory

AU - Wyeth, Cole

AU - Sturtivant, Carl

N1 - Publisher Copyright:
© 2023 Elsevier B.V.

PY - 2023/12/15

Y1 - 2023/12/15

N2 - Inspired by Solomonoff's theory of inductive inference (Solomonoff, 1964), we propose a prior based on circuit complexity. There are several advantages to this approach. First, it relies on a complexity measure that does not depend on the choice of Universal Turing machine (UTM). There is one universal definition for Boolean circuits involving a universal operation (e.g. NAND) with simple conversions to alternative definitions (with AND, OR, and NOT). Second, there is no analogue of the halting problem. The output value of a circuit can be calculated recursively by a computer in time proportional to the number of gates, whereas a short program may run for a very long time. Our prior assumes that a Boolean function (or equivalently, a Boolean string of fixed length) is generated by some Bayesian mixture of circuits. This model is appropriate for learning Boolean functions from partial information, a problem commonly known in machine learning as “binary classification.” We argue that an inductive bias towards simple explanations as measured by circuit complexity is appropriate for this problem.

AB - Inspired by Solomonoff's theory of inductive inference (Solomonoff, 1964), we propose a prior based on circuit complexity. There are several advantages to this approach. First, it relies on a complexity measure that does not depend on the choice of Universal Turing machine (UTM). There is one universal definition for Boolean circuits involving a universal operation (e.g. NAND) with simple conversions to alternative definitions (with AND, OR, and NOT). Second, there is no analogue of the halting problem. The output value of a circuit can be calculated recursively by a computer in time proportional to the number of gates, whereas a short program may run for a very long time. Our prior assumes that a Boolean function (or equivalently, a Boolean string of fixed length) is generated by some Bayesian mixture of circuits. This model is appropriate for learning Boolean functions from partial information, a problem commonly known in machine learning as “binary classification.” We argue that an inductive bias towards simple explanations as measured by circuit complexity is appropriate for this problem.

KW - Algorithmic information theory

KW - Boolean functions

KW - Circuit complexity

KW - Kolmogorov complexity

KW - Sequence prediction

KW - Solomonoff induction

UR - http://www.scopus.com/inward/record.url?scp=85173610915&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85173610915&partnerID=8YFLogxK

U2 - 10.1016/j.physd.2023.133925

DO - 10.1016/j.physd.2023.133925

M3 - Article

AN - SCOPUS:85173610915

SN - 0167-2789

VL - 456

JO - Physica D: Nonlinear Phenomena

JF - Physica D: Nonlinear Phenomena

M1 - 133925

ER -