Year: 2020 Volume: 5 Issue: 3 Page Range: 138 - 143 Text Language: English DOI: 10.26833/ijeg.645426 Index Date: 07-06-2021

FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN)

Abstract:
Object detection and classification are among the most popular topics in photogrammetry and remote sensing. With technological developments, a large number of high-resolution satellite images have become available, making it possible to distinguish many different objects. Despite these developments, the need for human intervention remains one of the major problems in object detection and classification. Machine learning has long been the preferred option for reducing this need; although it has achieved success, human intervention is still required. Deep learning largely eliminates this problem: unlike traditional machine learning methods, deep learning methods carry out the learning process on raw data. Although deep learning has a long history, the main reasons for its recent rise in popularity are the availability of sufficient data for the training process and of hardware capable of processing that data. In this study, a performance comparison was made between two convolutional neural network architectures used for object segmentation and classification on images: SegNet and Fully Convolutional Networks (FCN). The two models were trained using the same training dataset and their performances were evaluated using the same test dataset. The results show that, for building segmentation, there is no significant difference in accuracy between the two architectures, although FCN is more successful than SegNet by 1%. This outcome may vary according to the dataset used during the training of the system.
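The comparison described above rests on scoring both models against the same ground-truth test masks. The sketch below is a hypothetical illustration (not the authors' code) of that evaluation step: a pixel-accuracy metric applied to two toy predicted building masks against one ground-truth mask, mirroring how a small per-pixel accuracy gap between two architectures would be measured.

```python
# Hypothetical sketch of per-pixel accuracy evaluation for binary
# building masks (1 = building, 0 = background). Illustrative only;
# the paper's actual evaluation pipeline is not shown in this record.

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    flat_pred = [p for row in pred for p in row]
    flat_truth = [t for row in truth for t in row]
    correct = sum(p == t for p, t in zip(flat_pred, flat_truth))
    return correct / len(flat_truth)

# Toy 4x4 ground-truth mask shared by both models (same test data).
truth = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 1, 1]]

# Hypothetical outputs: "model A" misses one building pixel,
# "model B" matches the ground truth exactly.
model_a_pred = [[1, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 1]]

model_b_pred = [[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 1]]

print(pixel_accuracy(model_a_pred, truth))  # 0.9375 (15/16 pixels)
print(pixel_accuracy(model_b_pred, truth))  # 1.0
```

Because both masks are scored against the identical ground truth, the difference between the two printed values is directly comparable, which is the property the same-test-set design in the study relies on.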
Keywords:

Document Type: Article Article Type: Research Article Access Type: Open Access
APA Sariturk B, Bayram B, Duran Z, Seker D (2020). FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN). International Journal of Engineering and Geosciences, 5(3), 138-143. https://doi.org/10.26833/ijeg.645426
Chicago Sariturk Batuhan, Bayram Bülent, Duran Zaide, Seker Dursun Zafer. "FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN)." International Journal of Engineering and Geosciences 5, no. 3 (2020): 138-143. https://doi.org/10.26833/ijeg.645426
MLA Sariturk Batuhan, Bayram Bülent, Duran Zaide, Seker Dursun Zafer. "FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN)." International Journal of Engineering and Geosciences, vol. 5, no. 3, 2020, pp. 138-143. https://doi.org/10.26833/ijeg.645426
AMA Sariturk B, Bayram B, Duran Z, Seker D. FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN). International Journal of Engineering and Geosciences. 2020; 5(3): 138-143. https://doi.org/10.26833/ijeg.645426
Vancouver Sariturk B, Bayram B, Duran Z, Seker D. FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN). International Journal of Engineering and Geosciences. 2020; 5(3): 138-143. https://doi.org/10.26833/ijeg.645426
IEEE Sariturk B, Bayram B, Duran Z, Seker D, "FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN)," International Journal of Engineering and Geosciences, vol. 5, no. 3, pp. 138-143, 2020. https://doi.org/10.26833/ijeg.645426
ISNAD Sariturk, Batuhan et al. "FEATURE EXTRACTION FROM SATELLITE IMAGES USING SEGNET AND FULLY CONVOLUTIONAL NETWORKS (FCN)". International Journal of Engineering and Geosciences 5/3 (2020), 138-143. https://doi.org/10.26833/ijeg.645426