
Project Group: EEEAG | Page Count: 127 | Project No: 120E100 | Project End Date: 01.03.2023 | Text Language: Turkish | DOI: 120E100 | Index Date: 25-01-2024

Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu

Öz:
İnsan öğrenmesi artımlıdır (tüm bilgiler tek seferde verilmez) ve basit örneklerle başlar. Eğitim sistemlerimiz bu hipotez üzerine kurulmuştur. Önce temel/basit konular kavratılır, ardından zor konulara geçilir. Makinelerin öğrenmesini amaçlayan yapay öğrenme alanında da bu hipotez kullanılarak planlı öğrenme (curriculum learning) ve türevleri geliştirilmiştir. Planlı öğrenme iteratif eğitime sahip algoritmalarda eğitim örneklerinin sürece hangi sırada dahil edileceği sorusuna cevap arar. Bu projenin de temel araştırma sorusu budur. Bu kapsamda yenilikçi planlı öğrenme yöntemlerinin geliştirilmesi, yapay örnek üretim yöntemlerinin geliştirilmesi, yapay örnek içeren veri kümeleri için planlı öğrenme yöntemlerinin geliştirilmesi, YSA'lar harici algoritmalar için planlı öğrenmenin etkisinin incelenmesi, geliştirilen algoritmaların dağıtık öğrenme ile gerçeklenmesi ve geliştirilen algoritmaların teorik analizlerinin yapılması konularında çalışmalar metin veri kümeleri üzerinde başarı ile yürütülmüştür. Proje çıktısı olan yöntem, yazılım ve yayınlara https://github.com/projectSOTS/ adresinden erişilebilir.
Anahtar Kelimeler: Planlı Öğrenme, Derin Öğrenme, Yapay Sinir Ağları, Yapay Örnek Üretimi, Sinirsel Dil Modelleri

Ordering Optimization of Training Samples in Machine Learning

Abstract:
Human learning is incremental (not all information is given at once) and starts with simple examples. Our education systems are built on this hypothesis: basic, simple subjects are taught first, and difficult subjects follow. Building on the same hypothesis, curriculum learning and its derivatives have been developed in the field of machine learning. For algorithms with iterative training, curriculum learning seeks to answer the question of the order in which training samples should be included in the training process. This is the main research question of this project. The topics studied in this context are as follows: the development of innovative curriculum learning methods, the development of data augmentation methods, the development of curriculum learning methods for augmented datasets, the examination of the effect of curriculum learning on algorithms other than ANNs, the implementation of the developed algorithms with distributed learning, and the theoretical analysis of the developed algorithms. Text datasets were used to validate the proposed methods. The proposed methods, software, and publications are available at https://github.com/projectSOTS/.
Keywords: Curriculum Learning, Deep Learning, Artificial Neural Networks, Data Augmentation, Neural Language Models
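The ordering question at the heart of the abstract can be sketched in a few lines: rank the training samples by some difficulty score, then feed an iterative learner growing, easy-to-hard subsets. The sketch below is a minimal illustration of this idea only, not the project's actual method; the length-based difficulty score and the function names (`curriculum_order`, `growing_sets`) are assumptions chosen for the example.

```python
def curriculum_order(samples, difficulty):
    """Return samples sorted from easiest to hardest by a difficulty score."""
    return sorted(samples, key=difficulty)

def growing_sets(ordered_samples, n_stages):
    """Yield increasingly large easy-to-hard prefixes of the ordered samples,
    ending with the full training set."""
    step = max(1, len(ordered_samples) // n_stages)
    ends = list(range(step, len(ordered_samples), step))
    ends.append(len(ordered_samples))  # last stage always covers everything
    for end in ends:
        yield ordered_samples[:end]

# Toy text data: text length stands in for a real difficulty measure.
texts = ["ok", "a short sentence", "a much longer and harder sentence"]
ordered = curriculum_order(texts, difficulty=len)
for stage, subset in enumerate(growing_sets(ordered, n_stages=3), 1):
    print(stage, len(subset))  # prints stage sizes 1, 2, 3
```

In a real pipeline, each yielded subset would drive one or more training epochs of the iterative learner, so easy samples dominate early updates and hard samples enter later.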

Access Type: Open Access
APA Amasyali F, ALİYEV R, ÜNAL B, KOPARAN I, TANYEL T, aktaş a, ALKURDİ B, GÜNDOĞ B, DEDEOĞLU E, BACAK T, CEBECİ Ö, KESGİN H, UÇAR T, KOÇ İ, ŞAHİN F (2023). Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu. , 0 - 127. 120E100
Chicago Amasyali Fatih,ALİYEV RAMEŞ,ÜNAL BAHATTİN CİHAN,KOPARAN IŞIL BERFİN,TANYEL TOYGAR,aktaş abdulsamet,ALKURDİ BESHER,GÜNDOĞ BUĞRA HAMZA,DEDEOĞLU ENES,BACAK TALHA,CEBECİ ÖMER FARUK,KESGİN HİMMET TOPRAK,UÇAR TOLGA RECEP,KOÇ İPEK,ŞAHİN Feyza Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu. (2023): 0 - 127. 120E100
MLA Amasyali Fatih,ALİYEV RAMEŞ,ÜNAL BAHATTİN CİHAN,KOPARAN IŞIL BERFİN,TANYEL TOYGAR,aktaş abdulsamet,ALKURDİ BESHER,GÜNDOĞ BUĞRA HAMZA,DEDEOĞLU ENES,BACAK TALHA,CEBECİ ÖMER FARUK,KESGİN HİMMET TOPRAK,UÇAR TOLGA RECEP,KOÇ İPEK,ŞAHİN Feyza Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu. , 2023, ss.0 - 127. 120E100
AMA Amasyali F,ALİYEV R,ÜNAL B,KOPARAN I,TANYEL T,aktaş a,ALKURDİ B,GÜNDOĞ B,DEDEOĞLU E,BACAK T,CEBECİ Ö,KESGİN H,UÇAR T,KOÇ İ,ŞAHİN F Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu. . 2023; 0 - 127. 120E100
Vancouver Amasyali F,ALİYEV R,ÜNAL B,KOPARAN I,TANYEL T,aktaş a,ALKURDİ B,GÜNDOĞ B,DEDEOĞLU E,BACAK T,CEBECİ Ö,KESGİN H,UÇAR T,KOÇ İ,ŞAHİN F Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu. . 2023; 0 - 127. 120E100
IEEE Amasyali F,ALİYEV R,ÜNAL B,KOPARAN I,TANYEL T,aktaş a,ALKURDİ B,GÜNDOĞ B,DEDEOĞLU E,BACAK T,CEBECİ Ö,KESGİN H,UÇAR T,KOÇ İ,ŞAHİN F "Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu." , ss.0 - 127, 2023. 120E100
ISNAD Amasyali, Fatih vd. "Yapay Öğrenmede Eğitim Örneklerini Sıralama Optimizasyonu". (2023), 0-127. https://doi.org/120E100