Human action recognition from depth information is a growing research area, particularly in human-computer interaction. Depth data can provide more robust features and thereby increase the accuracy of action recognition. This paper presents an approach to recognizing basic human actions using depth information from RGB-D sensors. Features obtained from a trained skeletal model and from raw depth data are studied. Angle and displacement features derived from the skeletal model proved the most useful for classification; however, HOG descriptors of gradient and depth history images computed from the raw depth data further improved performance when combined with the skeletal features. Actions are classified with the random forest algorithm. The model is evaluated on the MSR Action 3D dataset and compared with recent methods from the literature. The experiments show that the proposed model produces promising results.
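The pipeline summarized above (joint-angle and displacement features from a skeleton, classified with a random forest) can be illustrated with a minimal sketch. This is not the paper's implementation: the joint indices, feature statistics, and synthetic data below are assumptions made purely for illustration, using scikit-learn's `RandomForestClassifier`.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def skeleton_features(frames):
    """frames: (T, J, 3) array of 3D joint positions over T frames.

    Returns a fixed-length vector of per-joint displacement statistics
    plus statistics of one example angle (hypothetical joint indices:
    4 = shoulder, 5 = elbow, 6 = wrist)."""
    # per-joint displacement magnitude between consecutive frames: (T-1, J)
    disp = np.linalg.norm(np.diff(frames, axis=0), axis=2)
    angles = np.array([joint_angle(f[4], f[5], f[6]) for f in frames])
    return np.concatenate([disp.mean(axis=0), disp.std(axis=0),
                           [angles.mean(), angles.std()]])

# Synthetic demo data: 40 sequences, 30 frames each, 20 joints,
# with 3 pretend action classes (real work would use recorded skeletons).
rng = np.random.default_rng(0)
X = np.stack([skeleton_features(rng.normal(size=(30, 20, 3)))
              for _ in range(40)])
y = rng.integers(0, 3, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(X.shape, clf.score(X, y))
```

In the full approach the abstract describes, this skeletal feature vector would be concatenated with HOG descriptors of gradient and depth history images before classification.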