In this paper, we study the effect of randomness in agents' decision making on social learning. In the addressed system, agents make decisions sequentially about the true state of nature. Each agent observes a signal produced according to one of two hypotheses, which represents the state of nature; the signals of all the agents are generated independently from the same state. The agents also know the decisions of all the previous agents in the network. Randomness is modeled by a policy that randomly maps the agents' beliefs to the action space. We propose that the agents learn from the decisions of the previous agents and update their beliefs by Bayes' rule. We define the concept of social belief about the truthfulness of the two hypotheses and provide results on its convergence. We also prove that with the proposed random policy, information cascades are avoided and asymptotic learning occurs. We apply the random policy to data models that represent the observations by a distribution belonging to the exponential family. We then provide performance and convergence analyses of the proposed method as well as simulation results that include comparisons with deterministic and hybrid policies.
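The sequential setting described above can be illustrated with a minimal simulation sketch. It is not the paper's exact method: the Gaussian signal model (an exponential-family member), the choice of acting on hypothesis 1 with probability equal to the posterior belief, and the grid-based computation of the action likelihoods are all illustrative assumptions. Each agent combines the public (social) belief with its private likelihood ratio via Bayes' rule, randomizes its decision, and observers fold the announced decision back into the social belief:

```python
import math
import random

random.seed(0)

# Illustrative signal model (an assumption, not the paper's):
# signals are N(mu0, sigma^2) under H0 and N(mu1, sigma^2) under H1.
mu0, mu1, sigma = 0.0, 1.0, 1.0
true_state = 1            # nature's draw: H1 is true
n_agents = 200

def pdf(x, mu):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes(pi, lr):
    """Update a belief pi = P(H1) with a likelihood ratio lr = f1/f0."""
    return pi * lr / (pi * lr + (1.0 - pi))

# Grid for numeric integration over the signal space.
grid = [-6.0 + 14.0 * k / 4000 for k in range(4001)]
dx = grid[1] - grid[0]

pi = 0.5                  # public (social) belief that H1 is true
for _ in range(n_agents):
    # Private signal and the agent's posterior belief.
    x = random.gauss(mu1 if true_state else mu0, sigma)
    p = bayes(pi, pdf(x, mu1) / pdf(x, mu0))
    # Random policy: announce H1 with probability equal to the belief.
    a = random.random() < p
    # Observers update the public belief with the action likelihoods
    # q_H = P(a=1 | H) = E[posterior | H], computed by numeric integration.
    q1 = sum(bayes(pi, pdf(s, mu1) / pdf(s, mu0)) * pdf(s, mu1) for s in grid) * dx
    q0 = sum(bayes(pi, pdf(s, mu1) / pdf(s, mu0)) * pdf(s, mu0) for s in grid) * dx
    pi = bayes(pi, q1 / q0) if a else bayes(pi, (1.0 - q1) / (1.0 - q0))

print(f"social belief in H1 after {n_agents} agents: {pi:.4f}")
```

Because the decision is randomized, every action remains informative about the private signal (q1 != q0 whenever 0 < pi < 1), so the social belief keeps accumulating evidence instead of locking into a cascade, which is the intuition behind the asymptotic-learning result; a deterministic threshold policy, by contrast, can reach a point where actions carry no new information.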
Original language: English (US)
Number of pages: 10
Journal: IEEE Transactions on Signal Processing
State: Published - Jun 15 2015
Bibliographical note: Publisher Copyright © 1991-2012 IEEE.
- Bayesian learning
- social learning
- asymptotic learning
- information cascade
- random decision making