Year: 2021 Volume: 8 Issue: 2 Page Range: 1029-1043 Text Language: English DOI: 10.3906/elk-2005-119 Index Date: 07-06-2022

Neural relation extraction: a review

Abstract:
Neural relation extraction discovers semantic relations between entities in unstructured text using deep learning methods. In this study, we present a clear categorization of existing relation extraction methods in terms of data expressiveness and data supervision, and provide a comprehensive, comparative review. We describe the evaluation methodologies and the datasets used for model assessment. We explicitly state the common challenges in the relation extraction task and point out the potential of pretrained models to address them. Accordingly, we investigate further research directions and improvement ideas in this field.
Keywords:
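To make the task the survey covers concrete: relation extraction takes a sentence with a marked entity pair and assigns a semantic relation label to that pair. The toy pattern matcher below only stands in for the neural models reviewed in the article; the patterns, labels, and example sentences are invented for illustration.

```python
# Minimal illustration of the relation extraction task: given a sentence and a
# (head, tail) entity pair, predict a relation label. A real system would use
# a neural classifier; here, hand-written patterns stand in for it.
import re

# Hypothetical relation inventory and trigger patterns (illustrative only).
PATTERNS = [
    (re.compile(r"was born in"), "place_of_birth"),
    (re.compile(r"is the capital of"), "capital_of"),
    (re.compile(r"works for"), "employed_by"),
]

def extract_relation(sentence: str, head: str, tail: str) -> str:
    """Return a relation label for (head, tail), or 'no_relation'."""
    # Look only at the text between the two entity mentions.
    try:
        start = sentence.index(head) + len(head)
        end = sentence.index(tail, start)
    except ValueError:
        return "no_relation"
    between = sentence[start:end]
    for pattern, label in PATTERNS:
        if pattern.search(between):
            return label
    return "no_relation"

print(extract_relation("Ada Lovelace was born in London.", "Ada Lovelace", "London"))
# -> place_of_birth
```

Neural approaches replace the hand-written patterns with learned representations of the sentence and the entity pair, which is what allows them to generalize beyond fixed trigger phrases.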

Evaluation of Photon Percent Depth Dose and Beam Profile Parameters over a Ten-Year Period for the Elekta Synergy Platform Linear Accelerator

Abstract:
Objective: Quality control of the accuracy of radiation therapy requires a series of dosimetric and geometric tests before and throughout the treatment period. This study aimed to examine the Percent Depth Dose (PDD) and Beam Profile (BP) parameters obtained during part replacements due to breakdown and/or during annual Linear Accelerator (LINAC) quality control procedures over a ten-year period, and to evaluate the deviations from the measurement parameters transferred to the Treatment Planning System (TPS). Material and Methods: The study included the PDD and BP parameters obtained during quality control procedures for the Elekta Synergy Platform linear accelerator located in the Radiation Oncology Department of the University of Health Sciences, XXX Training and Research Hospital, between November 2011 and September 2020. Results: For 6 MV, the largest difference was 2.1 mm, found for R50 in March 2013. For 18 MV, the largest difference was 2.2 mm, found for R90 in February 2020. The largest difference in the Flatness parameter obtained from the BP curve was 2.0%, found in the AB direction in December 2014 for 6 MV. Conclusion: In situations where the energy and profile parameters of LINAC devices are likely to change, water phantom measurements should be performed and re-evaluated against the values transferred to the treatment planning system.
Keywords: Linear Accelerator, Percent Depth Dose, Beam Profile
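The deviation check described in the abstract can be sketched as a simple comparison of each water-phantom measurement against its TPS reference value. The sample values and the 2 mm / 2% action levels below are assumptions for illustration, not the study's actual tolerances or data.

```python
# Sketch of the abstract's deviation evaluation: compare measured PDD/profile
# parameters against the reference values stored in the treatment planning
# system (TPS) and flag values exceeding a tolerance for re-evaluation.

def deviation(measured: float, reference: float) -> float:
    """Absolute difference between a measured parameter and its TPS reference."""
    return abs(measured - reference)

def check(name: str, measured: float, reference: float, tolerance: float) -> str:
    """Format a pass/fail line for one QA parameter."""
    diff = deviation(measured, reference)
    status = "OK" if diff <= tolerance else "RE-EVALUATE"
    return f"{name}: diff={diff:.1f} ({status})"

# Hypothetical measurements (mm for depth parameters, % for flatness).
print(check("6 MV R50", measured=156.1, reference=154.0, tolerance=2.0))
print(check("18 MV R90", measured=32.2, reference=30.0, tolerance=2.0))
print(check("6 MV flatness AB", measured=105.0, reference=103.0, tolerance=2.0))
```

Any parameter flagged here would trigger the conclusion's recommendation: repeat the water phantom measurement and re-evaluate against the values in the TPS.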


Document Type: Article Article Type: Review Access Type: Open Access
APA AYDAR M, Inal A, Bozal O, SARPÜN İ, Özbay F (2021). Neural relation extraction: a review. Turkish Journal of Electrical Engineering and Computer Sciences, 8(2), 1029-1043. 10.3906/elk-2005-119
Chicago AYDAR MEHMET, Inal Aysun, Bozal Ozge, SARPÜN İsmail Hakkı, Özbay Furkan. "Neural relation extraction: a review." Turkish Journal of Electrical Engineering and Computer Sciences 8, no. 2 (2021): 1029-1043. 10.3906/elk-2005-119
MLA AYDAR MEHMET, Inal Aysun, Bozal Ozge, SARPÜN İsmail Hakkı, Özbay Furkan. "Neural relation extraction: a review." Turkish Journal of Electrical Engineering and Computer Sciences, vol. 8, no. 2, 2021, pp. 1029-1043. 10.3906/elk-2005-119
AMA AYDAR M, Inal A, Bozal O, SARPÜN İ, Özbay F. Neural relation extraction: a review. Turkish Journal of Electrical Engineering and Computer Sciences. 2021; 8(2): 1029-1043. 10.3906/elk-2005-119
Vancouver AYDAR M, Inal A, Bozal O, SARPÜN İ, Özbay F. Neural relation extraction: a review. Turkish Journal of Electrical Engineering and Computer Sciences. 2021; 8(2): 1029-1043. 10.3906/elk-2005-119
IEEE AYDAR M, Inal A, Bozal O, SARPÜN İ, Özbay F, "Neural relation extraction: a review," Turkish Journal of Electrical Engineering and Computer Sciences, vol. 8, pp. 1029-1043, 2021. 10.3906/elk-2005-119
ISNAD AYDAR, MEHMET et al. "Neural relation extraction: a review". Turkish Journal of Electrical Engineering and Computer Sciences 8/2 (2021), 1029-1043. https://doi.org/10.3906/elk-2005-119