Learning with limited memory: Bayesianism vs heuristics

We study the classical sequential hypothesis testing problem (Wald, 1947), but add a memory constraint modelled by finite automata. Generically, the optimal rule based on Bayesian updating cannot be implemented by any finite-state automaton. We then introduce stochastic finite-state automata under the memory constraint and study the constrained optimal rule. Two classes of information structure are considered: the breakthrough model, in which one signal fully reveals the state of nature while the others do not, and the decisional balance-sheet model, in which the two signals are of similar strength. In the first, randomization is strictly optimal whenever the memory constraint binds and the optimum requires some learning. In the second, randomization is not optimal, but the optimal finite automaton uses qualitative probabilities.
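
To fix ideas, here is a minimal toy sketch (in Python) of a randomized finite-state rule for sequential hypothesis testing under a breakthrough-style information structure. The number of memory states K, the breakthrough probability P_BREAK, the randomization probability EPS, and the transition rule itself are all illustrative assumptions, not the constrained-optimal automaton characterized in the paper; the sketch only shows where randomized transitions enter when memory is limited.

```python
import random

# Illustrative sketch only: all parameters and the transition rule are
# assumptions for exposition, not the paper's optimal automaton.

K = 3          # memory states 0..K-1 (the memory constraint)
P_BREAK = 0.2  # Pr(breakthrough signal | H1); it never occurs under H0
EPS = 0.1      # randomization probability in the transition rule

def draw_signal(true_state: str) -> str:
    """One signal per period: 'b' (breakthrough) is possible only under H1."""
    if true_state == "H1" and random.random() < P_BREAK:
        return "b"
    return "n"  # uninformative null signal

def transition(m: int, signal: str) -> int:
    """Stochastic transition: a breakthrough jumps to the absorbing top state;
    a null signal moves one state down only with small probability EPS,
    which is where randomization enters under the memory constraint."""
    if signal == "b":
        return K - 1
    if 0 < m < K - 1 and random.random() < EPS:
        return m - 1
    return m

def run(true_state: str, horizon: int = 100) -> str:
    """Simulate the automaton and report its terminal decision."""
    m = 1  # start in an interior memory state
    for _ in range(horizon):
        m = transition(m, draw_signal(true_state))
    return "accept H1" if m == K - 1 else "accept H0"

if __name__ == "__main__":
    print("truth H1:", run("H1"))
    print("truth H0:", run("H0"))
```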

Please sign up for meetings here: docs.google.com/spreadsheets/d/1Tf4YtDeDdmv3Dv379EyhWTL6lszs2Dy6yiff7yJeJAY/edit#gid=0