Various disciplines, such as machine learning, statistics, data mining, and artificial neural networks, are concerned with the estimation of data-analytic models. A common theme among these methodologies is the estimation of predictive models from data. In our digital age, an abundance of data and cheap computing power offer hope of knowledge discovery via the application of statistical and machine learning algorithms to empirical data. Such data-analytic knowledge both resembles and differs from classical scientific knowledge. For example, any scientific theory can be viewed as an inductive theory, because it generalizes over a finite number of observations (or experiments). The philosophical aspects of induction and knowledge discovery have been thoroughly explored in the Western philosophy of science, in an analysis dating back to Kant and Hume. All knowledge involves a combination of hypotheses/ideas and empirical data. In the modern digital age, the balance between ideas (mental constructs) and observed data (facts) has shifted decisively toward data. Classical scientific knowledge was produced mainly by strokes of genius (e.g., Newton, Maxwell, and Einstein). In contrast, much of modern knowledge in the life sciences and social sciences is derived via data-analytic modeling. We argue that such data-driven knowledge can be properly described following the methodology of predictive learning originally developed in VC-theory. This paper presents a brief survey of the philosophical concepts related to inductive inference and then extends these ideas to predictive, data-analytic knowledge discovery. We contrast classical first-principles knowledge, data-analytic knowledge, and beliefs. Several application examples illustrate the differences between classical statistical and predictive-learning approaches to data-analytic modeling. Finally, we discuss the interpretation of data-analytic models under the predictive learning framework.