Bibliography
- [1] M. Alaradi and S. Hilal, “Tree-based methods for loan approval,” in 2020 International Conference on Data Analytics for Business and Industry: Way Towards a Sustainable Economy (ICDABI), IEEE, 2020, pp. 1–6.
- [2] P. Rajesh, M. Alam, M. Tahernezhadi, C. Vikram, and P. Phaneendra, “Real time data science decision tree approach to approve bank loan from lawyer’s perspective,” in 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, 2020, pp. 921–929.
- [3] C. Rodríguez-Pardo et al., “Decision tree learning to predict overweight/obesity based on body mass index and gene polymorphisms,” Gene, vol. 699, pp. 88–93, 2019.
- [4] A. T. Azar and S. M. El-Metwally, “Decision tree classifiers for automated medical diagnosis,” Neural Computing and Applications, vol. 23, no. 7, pp. 2387–2403, 2013.
- [5] J. Mesarić and D. Sebalj, “Decision trees for predicting the academic success of students,” Croatian Operational Research Review, vol. 7, no. 2, pp. 367–388, 2016.
- [6] S. A. Kumar et al., “Efficiency of decision trees in predicting student’s academic performance,” 2011.
- [7] P. K. Dalvi, S. K. Khandge, A. Deomore, A. Bankar, and V. Kanade, “Analysis of customer churn prediction in telecom industry using decision trees and logistic regression,” in 2016 Symposium on Colossal Data Analysis and Networking (CDAN), IEEE, 2016, pp. 1–4.
- [8] P. Save, P. Tiwarekar, K. N. Jain, and N. Mahyavanshi, “A novel idea for credit card fraud detection using decision tree,” International Journal of Computer Applications, vol. 161, no. 13, 2017.
- [9] Y. Sahin, S. Bulkan, and E. Duman, “A cost-sensitive decision tree approach for fraud detection,” Expert Systems with Applications, vol. 40, no. 15, pp. 5916–5923, 2013.
- [10] W. B. Millard, “The wisdom of crowds, the madness of crowds: Rethinking peer review in the web era,” Annals of Emergency Medicine, vol. 57, no. 1, pp. A13–A20, 2011.
- [11] D. Heath, S. Kasif, and S. Salzberg, “K-dt: A multi-tree learning method,” in Proc. of the Second Int. Workshop on Multistrategy Learning, 1993, pp. 138–149.
- [12] T. K. Ho, “Random decision forests,” in Proceedings of 3rd International Conference on Document Analysis and Recognition, IEEE, vol. 1, 1995, pp. 278–282.
- [13] T. K. Ho, “The random subspace method for constructing decision forests,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 832–844, 1998.
- [14] Z.-H. Zhou, Ensemble methods: foundations and algorithms. CRC Press, 2025.
- [15] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Machine Learning, vol. 63, no. 1, pp. 3–42, 2006.
- [16] R. Bellman, Dynamic programming. Princeton, NJ: Princeton University Press, 1957, pp. 4–9.
- [17] A. J. Izenman, “Introduction to manifold learning,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 4, no. 5, pp. 439–446, 2012.
- [18] R. Nayak, U. C. Pati, and S. K. Das, “A comprehensive review on deep learning-based methods for video anomaly detection,” Image and Vision Computing, vol. 106, p. 104078, 2021.
- [19] J. Matoušek, “On variants of the Johnson–Lindenstrauss lemma,” Random Structures & Algorithms, vol. 33, no. 2, pp. 142–156, 2008.
- [20] W. Zhiqiang and L. Jun, “A review of object detection based on convolutional neural network,” in 2017 36th Chinese Control Conference (CCC), IEEE, 2017, pp. 11104–11109.
- [21] T. Kansal, S. Bahuguna, V. Singh, and T. Choudhury, “Customer segmentation using k-means clustering,” in 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), IEEE, 2018, pp. 135–139.
- [22] R. Kumari, M. Singh, R. Jha, N. Singh, et al., “Anomaly detection in network traffic using k-mean clustering,” in 2016 3rd International Conference on Recent Advances in Information Technology (RAIT), IEEE, 2016, pp. 387–393.
- [23] E. Bair, “Semi-supervised clustering methods,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 5, no. 5, pp. 349–361, 2013.
- [24] O. E. Zamir, “Clustering web documents: A phrase-based method for grouping search engine results,” Ph.D. dissertation, University of Washington, 1999.
- [25] S. A. Burney and H. Tariq, “K-means cluster analysis for image segmentation,” International Journal of Computer Applications, vol. 96, no. 4, 2014.
- [26] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
- [27] E. W. Forgy, “Cluster analysis of multivariate data: Efficiency versus interpretability of classifications,” Biometrics, vol. 21, pp. 768–769, 1965.
- [28] D. Arthur and S. Vassilvitskii, “K-means++: The advantages of careful seeding,” Stanford, Tech. Rep., 2006.
- [29] C. Elkan, “Using the triangle inequality to accelerate k-means,” in Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003, pp. 147–153.
- [30] D. Sculley, “Web-scale k-means clustering,” in Proceedings of the 19th International Conference on World Wide Web, 2010, pp. 1177–1178.
- [31] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation forest,” in 2008 Eighth IEEE International Conference on Data Mining, IEEE, 2008, pp. 413–422.
- [32] BBC, How a kingfisher helped reshape Japan’s bullet train, 2019. [Online]. Available: https://www.bbc.com/news/av/science-environment-47673287.
- [33] J. F. Vincent, O. A. Bogatyreva, N. R. Bogatyrev, A. Bowyer, and A.-K. Pahl, “Biomimetics: Its practice and theory,” Journal of the Royal Society Interface, vol. 3, no. 9, pp. 471–482, 2006.
- [34] S. Agatonovic-Kustrin and R. Beresford, “Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research,” Journal of Pharmaceutical and Biomedical Analysis, vol. 22, no. 5, pp. 717–727, 2000.
- [35] S. D. Holcomb, W. K. Porter, S. V. Ault, G. Mao, and J. Wang, “Overview on DeepMind and its AlphaGo Zero AI,” in Proceedings of the 2018 International Conference on Big Data and Education, 2018, pp. 67–71.
- [36] W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.
- [37] J. Howe, “Artificial intelligence at Edinburgh University: A perspective,” 2007.
- [38] I. J. Goodfellow, O. Vinyals, and A. M. Saxe, “Qualitatively characterizing neural network optimization problems,” arXiv preprint arXiv:1412.6544, 2014.
- [39] A. Sebé-Pedrós, “Stepwise emergence of the neuronal gene expression program in early animal evolution,” 2023.
- [40] D. G. Barrett, A. S. Morcos, and J. H. Macke, “Analyzing biological and artificial neural networks: Challenges with opportunities for synergy?” Current Opinion in Neurobiology, vol. 55, pp. 55–64, 2019.
- [41] H.-D. Block, “The perceptron: A model for brain functioning. I,” Reviews of Modern Physics, vol. 34, no. 1, p. 123, 1962.
- [42] S. Sharma, S. Sharma, and A. Athaiya, “Activation functions in neural networks,” Towards Data Science, vol. 6, no. 12, pp. 310–316, 2017.
- [43] R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annals of Eugenics, vol. 7, no. 2, pp. 179–188, 1936.
- [44] H. Sompolinsky, “The theory of neural networks: The Hebb rule and beyond,” in Heidelberg Colloquium on Glassy Dynamics: Proceedings of a Colloquium on Spin Glasses, Optimization and Neural Networks Held at the University of Heidelberg June 9–13, 1986, Springer, 2006, pp. 485–527.
- [45] D. O. Hebb, The organization of behavior: A neuropsychological theory. Psychology Press, 2005.
- [46] W. Gerstner and W. M. Kistler, “Mathematical formulations of Hebbian learning,” Biological Cybernetics, vol. 87, no. 5, pp. 404–415, 2002.
- [47] M.-C. Popescu, V. E. Balas, L. Perescu-Popescu, and N. Mastorakis, “Multilayer perceptron and neural networks,” WSEAS Transactions on Circuits and Systems, vol. 8, no. 7, pp. 579–588, 2009.
- [48] G. Bebis and M. Georgiopoulos, “Feed-forward neural networks,” IEEE Potentials, vol. 13, no. 4, pp. 27–31, 1994.
- [49] W. Samek, G. Montavon, S. Lapuschkin, C. J. Anders, and K.-R. Müller, “Explaining deep neural networks and beyond: A review of methods and applications,” Proceedings of the IEEE, vol. 109, no. 3, pp. 247–278, 2021.
- [50] S. Linnainmaa, “Algoritmin kumulatiivinen pyöristysvirhe yksittäisten pyöristysvirheiden Taylor-kehitelmänä” [The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors], Master’s thesis, University of Helsinki, 1970. Available in Finnish at https://people.idsia.ch/~juergen/linnainmaa1970thesis.pdf.
- [51] R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Neural Networks for Perception, Elsevier, 1992, pp. 65–93.
- [52] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
- [53] H. Knut, “Neural networks,” University of Applied Sciences Northwestern Switzerland, 2018, p. 7.
- [54] D. Hendrycks and K. Gimpel, “Gaussian error linear units (GELUs),” arXiv preprint arXiv:1606.08415, 2016.
- [55] G. Hinton et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
- [56] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, 2012.
- [57] Á. Zarándy, C. Rekeczky, P. Szolgay, and L. O. Chua, “Overview of CNN research: 25 years history and the current trends,” in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, 2015, pp. 401–404.
- [58] Z. Huang and W. Zhao, “Combination of ELMo representation and CNN approaches to enhance service discovery,” IEEE Access, vol. 8, pp. 130782–130796, 2020.
- [59] Q. Li, X. Li, B. Lee, and J. Kim, “A hybrid CNN-based review helpfulness filtering model for improving e-commerce recommendation service,” Applied Sciences, vol. 11, no. 18, p. 8613, 2021.
- [60] Z. Ouyang, J. Niu, Y. Liu, and M. Guizani, “Deep CNN-based real-time traffic light detector for self-driving vehicles,” IEEE Transactions on Mobile Computing, vol. 19, no. 2, pp. 300–313, 2019.
- [61] B. T. Nugraha, S.-F. Su, et al., “Towards self-driving car using convolutional neural network and road lane detector,” in 2017 2nd International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), IEEE, 2017, pp. 65–69.
- [62] H. Ye, Z. Wu, R.-W. Zhao, X. Wang, Y.-G. Jiang, and X. Xue, “Evaluating two-stream CNN for video classification,” in Proceedings of the 5th ACM on International Conference on Multimedia Retrieval, 2015, pp. 435–442.
- [63] D. H. Hubel and T. N. Wiesel, “Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex,” The Journal of Physiology, vol. 160, no. 1, p. 106, 1962.
- [64] K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.
- [65] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [66] V. Alto, Data augmentation in deep learning, 2020. [Online]. Available: https://medium.com/analytics-vidhya/data-augmentation-in-deep-learning-3d7a539f7a28.
- [67] G. Deco and E. T. Rolls, “Neurodynamics of biased competition and cooperation for attention: A model with spiking neurons,” Journal of Neurophysiology, vol. 94, no. 1, pp. 295–313, 2005.
- [68] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision, Springer, 2014, pp. 818–833.
- [69] C. Szegedy et al., “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1–9.
- [70] X. Xia, C. Xu, and B. Nan, “Inception-v3 for flower classification,” in 2017 2nd International Conference on Image, Vision and Computing (ICIVC), IEEE, 2017, pp. 783–787.
- [71] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, 2017.
- [72] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1725–1732.
- [73] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
- [74] F. Liu, X. Ren, Z. Zhang, X. Sun, and Y. Zou, “Rethinking skip connection with layer normalization in Transformers and ResNets,” arXiv preprint arXiv:2105.07205, 2021.
- [75] X. Han et al., “Pre-trained models: Past, present and future,” AI Open, vol. 2, pp. 225–250, 2021.