ABSTRACT

The problem of representational form has always limited the applicability of cognitive models: where symbolic representations have succeeded, distributed representations have failed, and vice versa. Hybrid modeling is thus a promising avenue, but it brings its own share of new problems. For instance, it doubles the number of necessary assumptions. To counter this problem, we propose that one network should generate the other, so that specific assumptions are required for only one network. In the present project, we plan to use a recurrent network to generate a Bayesian network. The former will model low-level cognition while the latter will represent higher-level cognition. Moreover, both models will be active in every task and will need to communicate in order to produce a single answer.