Abstract
In this paper we present a modification of the popular Black-Box Variational Inference (BBVI) approach that significantly improves its computational efficiency. We achieve this performance boost by replacing the standard gradient in the stochastic gradient ascent framework of BBVI with the natural gradient. Our experiments, including the training of neural networks, show that the proposed method outperforms the original BBVI algorithm on both synthetic and real data.
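To make the abstract's central idea concrete, the following is a minimal, illustrative sketch of BBVI with a natural-gradient update for a one-dimensional Gaussian variational family. The toy target density, step sizes, and sample counts are assumptions for illustration, not the paper's actual implementation; the key point is that the score-function gradient is preconditioned by the inverse Fisher information of q, which for N(mu, exp(rho)^2) is diag(sigma^2, 1/2) in the (mu, rho) parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target log-density: N(2, 1) (illustrative, not from the paper).
def log_p(theta):
    return -0.5 * (theta - 2.0) ** 2

# Variational family q(theta; mu, rho) = N(mu, sigma^2) with sigma = exp(rho).
def log_q(theta, mu, rho):
    sigma = np.exp(rho)
    return -0.5 * ((theta - mu) / sigma) ** 2 - rho - 0.5 * np.log(2 * np.pi)

mu, rho = 0.0, 0.0
lr, S = 0.1, 200  # step size and Monte Carlo samples per iteration
for step in range(500):
    sigma = np.exp(rho)
    theta = mu + sigma * rng.standard_normal(S)
    # Score-function (REINFORCE) estimator of the ELBO gradient,
    # as in standard BBVI: E_q[ grad log q * (log p - log q) ].
    w = log_p(theta) - log_q(theta, mu, rho)
    g_mu = np.mean((theta - mu) / sigma**2 * w)
    g_rho = np.mean((((theta - mu) / sigma) ** 2 - 1.0) * w)
    # Natural-gradient step: precondition by the inverse Fisher matrix,
    # which here is diag(sigma^2, 1/2).
    mu += lr * sigma**2 * g_mu
    rho += lr * 0.5 * g_rho
```

After these iterations the variational parameters approach the toy target (mu near 2, sigma near 1); replacing the last two lines with plain gradient steps (`mu += lr * g_mu`, `rho += lr * g_rho`) recovers the original BBVI update for comparison.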