Linear discriminant analysis (LDA) is a classical supervised subspace learning technique with wide applications. However, it is designed for vector inputs only, so it cannot exploit tensor structure and may yield suboptimal results on tensorial data. To address this problem, several multilinear LDA (MLDA) methods have been proposed to learn subspaces directly from tensors. By exploiting tensor structure, they achieve compact subspace representations, reduced parameter sizes, and improved robustness against the small-sample-size problem. However, existing MLDA methods do not take data uncertainty into account, fail to converge properly, or must introduce additional tuning parameters to obtain good convergence properties. In this paper, we address these limitations by proposing a probabilistic MLDA method for matrix inputs. Specifically, we propose a new generative model that incorporates structural information into the probabilistic framework, where each observed matrix is represented as a linear combination of collective and individual rank-one matrices. This gives our method both the expressiveness to capture discriminative features and nondiscriminative noise, and the ability to exploit 2-D tensor structure. To overcome the convergence problems of existing MLDAs, we develop an EM-type algorithm for parameter estimation that has closed-form solutions and convergence guarantees. Experimental results on real-world datasets demonstrate the superiority of the proposed method over other probabilistic and MLDA variants.
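To make the modeling idea concrete, a minimal sketch of such a generative model could take the following form; the symbols, priors, and decomposition ranks here are illustrative assumptions rather than the paper's exact definitions:
\[
\mathbf{X}_n \;=\; \sum_{r=1}^{R} y_{nr}\,\mathbf{u}_r \mathbf{v}_r^{\top} \;+\; \sum_{s=1}^{S} z_{ns}\,\mathbf{p}_s \mathbf{q}_s^{\top} \;+\; \mathbf{E}_n,
\]
where the first sum of rank-one matrices $\mathbf{u}_r \mathbf{v}_r^{\top}$ models the collective (discriminative, class-dependent) component through latent weights $y_{nr}$, the second sum models the individual (nondiscriminative) component through sample-specific weights $z_{ns} \sim \mathcal{N}(0,1)$, and $\mathbf{E}_n$ is isotropic Gaussian noise. Under a sketch of this kind, an EM-type procedure would alternate between inferring the posterior of the latent weights given the current factors (E-step) and updating the rank-one factors and noise variance in closed form (M-step).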