Browsing by Author "Haddad, Mohammed"
Now showing 1 - 2 of 2
Item
Offline Arabic handwritten character recognition: from conventional machine learning system to deep learning approaches (2022) Faouci, Soumia; Gaceb, Djamel; Haddad, Mohammed
Researchers have made great strides in Arabic handwritten character recognition in recent decades, especially with the rapid development of deep learning algorithms. The characteristics of Arabic manuscript text pose several problems for a recognition system. This paper first presents a conventional machine learning system based on the extraction of a set of preselected features and an SVM classifier. In the second part, a simplified convolutional neural network (CNN) model is proposed and compared with six other CNN models based on pre-trained architectures. The suggested methods were tested on three databases: two versions of the OIHACDB dataset and the AIA9K dataset. The experimental results show that the proposed CNN model obtained promising results, recognising 94.7%, 98.3%, and 95.6% of the test sets of OIHACDB-28, OIHACDB-40, and AIA9K, respectively.

Item
Word-Spotting approach using transfer deep learning of a CNN network (IEEE, 2020) Benabdelaziz, Ryma; Gaceb, Djamel; Haddad, Mohammed
Convolutional Neural Networks (CNNs) are deep learning models trained to automatically extract the most discriminating features directly from an input image for visual classification tasks. Recently, CNNs have attracted considerable interest thanks to their effectiveness in many computer vision applications (medical imaging, video surveillance, biometrics, pattern recognition, OCR, etc.). Transfer learning reuses a pretrained network to speed up and improve training on a new, related task or dataset. In this paper, we propose a new approach to handwritten word retrieval based on deep learning and transfer learning. We compare the performance of two types of features extracted via transfer learning: from a pre-trained model and from a fine-tuned network. Experiments are performed using six different CNN architectures and three similarity measures on the pre-segmented Bentham dataset from the ICDAR competition. The obtained results demonstrate the effectiveness of the proposed approach compared with the existing methods evaluated in this competition.
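The word-spotting pipeline described in the second abstract ranks word images by the similarity of their CNN feature vectors to a query. A minimal sketch of that retrieval step, assuming the feature vectors have already been extracted by a pre-trained network (the toy 3-D vectors, the `gallery` names, and the `spot_word` helper below are illustrative placeholders, not the paper's actual implementation):

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity: one of several possible similarity measures
    # between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def spot_word(query_vec, gallery):
    # Rank gallery word images (name -> feature vector) by
    # decreasing similarity to the query word's feature vector.
    ranked = sorted(gallery.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Toy example: 3-D vectors standing in for CNN embeddings of word images.
gallery = {
    "word_a": [1.0, 0.1, 0.0],
    "word_b": [0.0, 1.0, 0.2],
    "word_c": [0.9, 0.2, 0.1],
}
print(spot_word([1.0, 0.0, 0.0], gallery))  # word_a and word_c rank above word_b
```

In practice the embeddings would come from the penultimate layer of one of the six CNN architectures evaluated in the paper, either frozen (pre-trained features) or fine-tuned on the target handwriting data.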
