With recent developments in sensor technology such as the Microsoft Kinect, it has become much easier to augment visual data with three-dimensional depth information. In this paper, we propose a new approach to RGB-D based topological place representation, building on bubble space. While the bubble space representation is in principle transparent to the type and number of sensory inputs employed, in practice it has so far been verified only with visual data acquired either via a two-degrees-of-freedom camera head or an omnidirectional camera. The primary contribution of this paper is thus of a practical nature. We show that the bubble space representation can easily combine RGB and depth data while affording acceptable recognition performance, even with limited field-of-view sensing and simple features.