Advances in remote sensing technology now yield hyperspectral images with hundreds of spectral bands across the electromagnetic spectrum. LiDAR data, which provides altitude information, supplies complementary information about the imaged area. In this study, the semantic segmentation problem is solved in two stages using these datasets: information fusion and classification. First, morphological profile maps were produced from the hyperspectral and LiDAR images in the Houston dataset; the spectral data and morphological profiles were then fused by concatenation. Next, the fused data were filtered with the filters of the first convolutional layer of AlexNet, a highly efficient deep convolutional architecture for image classification. Finally, the resulting data were classified with the proposed deep convolutional neural network. The classification results are compared with five methods proposed in recent years, and the proposed method achieves the best results among the competing methods.
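The fusion stage described above (morphological profiles concatenated with the spectral bands) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array shapes, structuring-element sizes, and the choice to build the profile from the LiDAR channel alone are assumptions for brevity; in the study, profiles are derived from both the hyperspectral and LiDAR imagery.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_profile(band, sizes=(3, 5, 7)):
    # A morphological profile stacks greyscale openings and closings
    # of one band at several structuring-element sizes.
    maps = []
    for s in sizes:
        maps.append(grey_opening(band, size=s))
        maps.append(grey_closing(band, size=s))
    return np.stack(maps, axis=-1)

# Toy stand-ins for the Houston data (real scenes are much larger).
hsi = np.random.rand(64, 64, 144).astype(np.float32)   # 144 spectral bands
lidar = np.random.rand(64, 64).astype(np.float32)      # elevation map

# Profile from the LiDAR channel, then channel-wise concatenation fusion.
mp_lidar = morphological_profile(lidar)                # (64, 64, 6)
fused = np.concatenate([hsi, mp_lidar], axis=-1)
print(fused.shape)  # (64, 64, 150)
```

The fused cube is what would then be passed through the first-layer AlexNet filters and on to the classifier.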