Year: 2020 Volume: 7 Issue: 3 Pages: 272 - 279 Text Language: English DOI: 10.30897/ijegeo.737993 Index Date: 30-06-2021

Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries

Abstract:
Segmentation is one of the most popular classification techniques that still carries semantic labels. In this context, the segmentation of objects such as cars, airplanes, ships, and buildings, which are independent of the background, as well as classes such as land use and vegetation, which are difficult to discriminate from the background, is considered. However, image segmentation studies often encounter difficulties such as shadow, image occlusion, background clutter, lighting, and shading, which cause fundamental changes in the appearance of features. With the development of technology, it has become easy to obtain high spatial resolution satellite imageries and aerial photographs that contain detailed texture information. In parallel with these improvements, deep learning architectures have been widely used to solve several computer vision tasks of increasing difficulty. Thus, regional characteristics, both artificial and natural objects, can be perceived and interpreted precisely. In this study, two different subsets produced from a large open-source labeled image set were used for road segmentation. The labeled data set consists of 150 satellite images of size 1500 x 1500 pixels at 1.2 m resolution, which was not efficient for training. To avoid this problem, the imageries were divided into smaller dimensions: selected images from the data set were split into small patches of 256 x 256 pixels and 512 x 512 pixels to train the system, and comparisons between them were carried out. To train the system using these datasets, two different artificial neural network architectures, U-Net and Fully Convolutional Networks (FCN), which are used for object segmentation on high-resolution images, were selected. When test data of the same size as the training data were analyzed, approximately 97% extraction accuracy was obtained from high-resolution imageries trained by FCN at 512 x 512 dimensions.
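The paper's own preprocessing code is not given in this record; as a minimal sketch of the tiling step the abstract describes, the following Python/NumPy snippet cuts a 1500 x 1500 scene into non-overlapping 256 x 256 patches (assuming partial tiles at the right and bottom edges are discarded, which the abstract does not specify) and shows the simple pixel-accuracy metric implied by the reported ~97% figure. Function names are illustrative, not from the paper.

```python
import numpy as np

def split_into_patches(image: np.ndarray, patch: int) -> list:
    """Split an H x W (x C) image into non-overlapping patch x patch tiles,
    discarding any partial tiles at the right/bottom edges."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
    return tiles

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels where the predicted binary road mask
    matches the ground-truth mask."""
    return float(np.mean(pred == truth))

# A 1500 x 1500 scene yields 5 x 5 = 25 full tiles of 256 x 256,
# or 2 x 2 = 4 full tiles of 512 x 512, with edge remainders dropped.
scene = np.zeros((1500, 1500), dtype=np.uint8)
print(len(split_into_patches(scene, 256)))  # 25
print(len(split_into_patches(scene, 512)))  # 4
```

Under this edge-discarding assumption, the 512 x 512 setting keeps less of each scene per tile count but gives the network more spatial context per sample, which is consistent with the comparison the study reports.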
Keywords:

Document Type: Article Article Type: Research Article Access Type: Open Access
  • Badrinarayanan, V., Kendall, A., Cipolla, R. (2017). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(12), 2481-2495.
  • Barghout, L., Lee, L. (2004). U.S. Patent Application No. 10/618,543.
  • Çelik, İ., Gazioğlu, C. (2020). Coastline Difference Measurement (CDM) Method, International Journal of Environment and Geoinformatics, 7(1): 1-5. doi. 10.30897/ijegeo.706792.
  • Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A. L. (2014). Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv preprint arXiv:1412.7062.
  • Cheng, D., Liao, R., Fidler, S., Urtasun, R. (2019). Darnet: Deep active ray network for building segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7431-7439).
  • Cheng, G., Han, J. (2016). A survey on object detection in optical remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing, 117, 11-28.
  • Chollet, F. (2018). Deep Learning with Python. Manning.
  • Digitalogy (2020). The Difference Between Artificial Intelligence, Machine Learning, And Deep Learning. Retrieved 15 May 2020 from https://blog.digitalogy.co/the-difference-betweenartificial-intelligence-machine-learning-and-deeplearning/
  • Erdem, F., Avdan, U. (2020). Comparison of Different U-Net Models for Building Extraction from High Resolution Aerial Imagery. International Journal of Environment and Geoinformatics (IJEGEO), 7(3): 221-227. DOI: 10.30897/ijegeo.
  • Esetlili, M., Bektas Balcik, F., Balik Sanli, F., Kalkan, K., Ustuner, M., Goksel, Ç., Gazioğlu, C., Kurucu, Y. (2018). Comparison of Object and Pixel-Based Classifications for Mapping Crops Using Rapideye Imagery: A Case Study of Menemen Plain, Turkey. International Journal of Environment and Geoinformatics, 5(2), 231-243. doi: 10.30897/ijegeo.442002.
  • Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27-48.
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • Liu, W., Rabinovich, A., Berg, A. C. (2015). Parsenet: Looking wider to see better. arXiv preprint arXiv:1506.04579
  • Long, J., Shelhamer, E., Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).
  • Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D. (2020). Image Segmentation Using Deep Learning: A Survey. arXiv preprint arXiv:2001.05566.
  • Mnih, V. (2013). Machine learning for aerial image labelling (PhD thesis). University of Toronto, Toronto, Canada.
  • Nweke, H. F., Teh, Y. W., Al-Garadi, M. A., Alo, U. R. (2018). Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Systems with Applications, 105, 233-261.
  • Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer, Cham.
  • Wang, G., Li, W., Ourselin, S., Vercauteren, T. (2017, September). Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI brainlesion workshop (pp. 178-190). Springer, Cham.
APA Öztürk O, Sariturk B, Seker D (2020). Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics, 7(3), 272 - 279. 10.30897/ijegeo.737993
Chicago Öztürk Ozan, Sariturk Batuhan, Seker Dursun Zafer. Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics 7, no.3 (2020): 272 - 279. 10.30897/ijegeo.737993
MLA Öztürk Ozan, Sariturk Batuhan, Seker Dursun Zafer. Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics, vol.7, no.3, 2020, pp.272 - 279. 10.30897/ijegeo.737993
AMA Öztürk O, Sariturk B, Seker D. Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics. 2020; 7(3): 272 - 279. 10.30897/ijegeo.737993
Vancouver Öztürk O, Sariturk B, Seker D. Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics. 2020; 7(3): 272 - 279. 10.30897/ijegeo.737993
IEEE Öztürk O, Sariturk B, Seker D, "Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries," International Journal of Environment and Geoinformatics, vol.7, no.3, pp.272 - 279, 2020. 10.30897/ijegeo.737993
ISNAD Öztürk, Ozan et al. "Comparison of Fully Convolutional Network (FCN) and U-Net for Road Segmentation from High Resolution Imageries". International Journal of Environment and Geoinformatics 7/3 (2020), 272-279. https://doi.org/10.30897/ijegeo.737993