Browsing by Author "Cherifi, Dalila"
Now showing 1 - 20 of 27
Item 3D shape modelling of femur (IEEE, 2017)
Cherifi, Dalila; Soual, Imene; Omari, Sabiha

Item Abnormal tissus extraction in MRI brain medical images (IEEE, 2011)
Cherifi, Dalila; Doghmane, Mohamed Zinelabidine; Nait-Ali, A.; Aici, Zakia; Bouzelha, Salim
This study is a comparison between two image segmentation methods: the first is based on normal brain tissue recognition followed by tumor extraction using a thresholding method; the second is a classification based on EM segmentation, used for both brain recognition and tumor extraction. The goal of these methods is to detect, segment, extract, classify and measure properties of normal and abnormal (tumor) brain tissues.

Item Aerial forest smoke’s fire detection using enhanced YOLOv5 (Springer, 2023)
Cherifi, Dalila; Bekkour, Belkacem; Benmalek, Assala; Bayou, Meroua; Mechti, Ines; Bekkouche, Abdelghani; Amine, Chaima; Halak, Ahmed
Forest fires around the world are the main cause of the devastation of millions of forest hectares, destroying infrastructure and unfortunately causing many human casualties among both firefighting crews and civilians who may be accidentally surrounded by the fire. The early detection of more than 58,950 forest fires and real-time fire perception are two key factors that allow firefighting crews to act accordingly and prevent the fire from reaching unmanageable proportions [1]. Forest fire detection remains a challenging problem. Traditional methodologies depend on expensive hardware and sensors that may be inaccurate due to environmental parameters and weather fluctuations. This paper proposes an accurate, intelligent deep-learning-based YOLOv5 model to detect forest fires from aerial images.

Item Artificial Intelligence Based Detection of COVID-19 Pneumonia Using CT Scan and X-ray Images: A Comparative study (Institute of Electrical and Electronics Engineers Inc, 2023)
Ilyas, Muhammad; Cherifi, Dalila
According to a new study, a computer program trained to see patterns by analyzing thousands of chest X-rays was able to predict, with up to 95% accuracy, which patients with coronavirus disease (COVID-19) would develop life-threatening complications within four days. To quickly identify patients with COVID-19 whose condition is most likely to deteriorate, hospital physicians and radiologists require tools like our program. Unfortunately, we are fighting one of the worst epidemics ever known to mankind, COVID-19, a coronavirus-derived pathogen. Ground-glass opacity appears in chest X-ray and CT scan images as a result of fibrosis in the lungs once the virus has reached them. Artificial intelligence techniques can be used to identify and quantify the infection because of the significant differences between infected and non-infected X-ray images. A classification model for interpreting chest X-ray and CT scan images is proposed, which may lead to improved COVID-19 diagnosis. Our method classifies chest X-rays into three categories: normal, viral pneumonia, and COVID-19. Additionally, classification of COVID-19 from CT scan images achieves higher accuracy than from X-ray images.

Item Brain tumor classification using convolutional neural networks and transfer learning (Springer, 2023)
Cherifi, Dalila; Cherifi, Zakaria
Brain tumors are one of the top causes of mortality in both children and adults across the world. Early detection of the tumor can give the patient a new chance at life through effective treatment. Despite great medical and technological advances, current test methods for diagnosing and classifying brain tumors are prone to human error, since human-assisted manual classification can result in incorrect prognosis and diagnosis. These drawbacks highlight the need for a completely automated system for the detection of brain tumors. The emergence of deep learning and its success in image classification, warranted by its performance and its ability to generalize across varied data, led us naturally to use it to solve this problem. This work aims to be a concise exposition of deep learning architectures applied to medical imaging, with a focus on the analysis of MRI images for the automatic classification of brain tumors for early diagnosis purposes. We treat classification as a supervised learning problem and address it by means of Convolutional Neural Networks (CNN). Two different CNN models are proposed for two separate classification tasks, with various hyper-parameters changed and tuned. Two datasets were used: the first dataset of brain MRI images provided by Navoneel Chakrabarty, and the second acquired from the Kaggle platform under the name BT-multiclass. Using the first proposed model, brain tumor detection is accomplished with 91% accuracy. With an accuracy of 92%, the second proposed model classifies brain tumors into four types: non-tumor, glioma, meningioma, and pituitary. Using transfer learning, the proposed CNN models for both tasks are then compared to popular pre-trained CNN models such as Inception-v3, ResNet-50, and VGG-16, and satisfactory findings are obtained. The inclusion of this type of methodology benefits both the patient and the physician, making it possible to carry out more precise quantitative diagnoses.
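As a rough illustration of the kind of CNN classifier the brain-tumor abstract above describes, the following minimal Keras sketch builds a small four-class network (non-tumor, glioma, meningioma, pituitary). It is not the architecture from the paper; the input size, layer widths and training settings are assumptions.

```python
# Minimal sketch of a small CNN for 4-class brain-MRI classification
# (illustrative only; not the architecture used in the paper).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # non-tumor, glioma, meningioma, pituitary

def build_small_cnn(input_shape=(224, 224, 1)):  # grayscale MRI slices assumed
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_small_cnn()
model.summary()
```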
Item Classification of Left/Right Hand and Foot Movements from EEG using Machine Learning Algorithms (Institute of Electrical and Electronics Engineers Inc, 2023)
Cherifi, Dalila; Berghouti, Baha Eddine; Boubchir, Larbi
In recent years, there has been growing interest in utilizing electroencephalography (EEG) data and machine learning techniques to develop innovative solutions for individuals with disabilities. The ability to accurately classify hand and foot motion from EEG signals holds great potential for enabling individuals to regain control and functionality of impaired limbs, improving their quality of life and independence, and offering a better solution than traditional ones that often require physical contact or can be challenging to operate. In our study, we focus on hand (right/left) and foot motion disabilities, using supervised machine learning algorithms for the classification of EEG data related to left/right hand and foot movements, aiming to reach accurate results that can contribute to a solution for people with this kind of motion disability. Three supervised machine learning algorithms are considered for the EEG classification, namely Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM), using the Common Spatial Patterns (CSP) algorithm and the logarithm of the variance (logvar) for feature extraction. In our experiments, we adopted these algorithms to classify the motor imagery EEG dataset for hand and foot movements given in BCI Competition IV. The data went through several steps before being fed to the models, including filtering, feature extraction, and discrimination. We achieved significant success in accurately classifying hand movements in the initial experiment, attaining a classification accuracy of up to 97.5% with SVM and LDA. Furthermore, in the multi-class task involving both hand (right/left) and foot movements, the KNN and SVM classifiers yielded commendable results of up to 87%. These models can be further developed; a hardware implementation is planned as future work for this study.
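The pipeline described above (CSP spatial filtering, log-variance features, then an LDA/KNN/SVM classifier) maps closely onto standard tooling. A minimal sketch using MNE's CSP and scikit-learn's LDA is shown below; the epoch shapes and the dummy data are assumptions, and this is not the authors' code.

```python
# Sketch of a CSP + log-variance + LDA pipeline for motor-imagery EEG
# (illustrative; assumes X of shape (n_trials, n_channels, n_samples) and labels y).
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Dummy data standing in for band-pass-filtered BCI Competition IV epochs.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 22, 500))   # 100 trials, 22 channels, 500 samples
y = rng.integers(0, 2, size=100)          # e.g. left hand vs right hand

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # log=True yields log-variance (logvar) features
    ("lda", LinearDiscriminantAnalysis()),
])

scores = cross_val_score(clf, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```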
Item Combining improved Euler and Runge-Kutta 4th order for tractography in Diffusion-Weighted MRI (Elsevier, 2017)
Cherifi, Dalila; Boudjada, Messaoud; Morsli, Abdelatif; Girard, Gabriel; Deriche, Rachid

Item Comparative Study on Early Stage Diabete Detection by Using Machine Learning Methods (Institute of Electrical and Electronics Engineers, 2023)
Cherifi, Dalila; Djellouli, Seyyid Ahmed; Riabi, Hanane; Hamadouche, Mohamed
This paper introduces an approach to diabetes prediction leveraging machine learning algorithms. The study is dedicated to raising the precision of medical examinations through the application of machine learning to electronic health records (EHRs). In our investigation of the Pima Indian dataset, we employed two distinct strategies to address missing values: data imputation and, notably, a novel filtered-data approach. Subsequently, we evaluated six supervised machine learning models: Logistic Regression, Random Forest, K-Nearest Neighbor, Support Vector Machine, XGBoost, and CatBoost. Metrics including accuracy, precision, sensitivity, specificity, and stability were assessed. We achieved a commendable 98% accuracy with the Random Forest classifier using the imputation strategy. Our main contribution, however, lies in the filtered-data approach, where we achieved a promising 84% accuracy using the XGBoost classifier. This finding establishes the superiority of the filtered-data methodology, a significant step towards enhancing patient risk-scoring systems and anticipating the onset of disease.
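As a hedged illustration of the imputation strategy mentioned above, the sketch below treats zeros in the usual Pima physiological columns as missing values, imputes them with the median, and trains a Random Forest. The file name diabetes.csv and the column names are assumptions, and the paper's exact preprocessing is not reproduced.

```python
# Sketch of an "imputation" strategy on the Pima Indians dataset
# (illustrative; assumes a local diabetes.csv with the standard Kaggle column names).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")  # hypothetical local copy of the Pima dataset

# Physiological measurements where a recorded 0 really means "missing".
zero_as_missing = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
df[zero_as_missing] = df[zero_as_missing].replace(0, np.nan)

X = df.drop(columns="Outcome")
y = df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

imputer = SimpleImputer(strategy="median")
X_train_imp = imputer.fit_transform(X_train)
X_test_imp = imputer.transform(X_test)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train_imp, y_train)
print("Accuracy:", accuracy_score(y_test, rf.predict(X_test_imp)))
```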
Item Convolution neural network deployment for plant leaf diseases detection (Springer, 2023)
Cherifi, Dalila; Bayou, Meroua; Benmalek, Assala; Mechti, Ines; Bekkouche, Abdelghani; Bekkour, Belkacem; Amine, Chaima; Ahmed, Halak
The automated identification of plant diseases from plant leaves is a major breakthrough, and early, accurate detection of plant diseases positively impacts crop productivity and quality. Making early plant disease detection widely accessible is therefore crucial. This work has an environmental goal: saving plants from threatening diseases by providing early detection of affected leaves. We studied the performance of different Convolutional Neural Network (CNN) architectures in predicting 26 diseases across 14 plant species. The work examined the complexity of the system and compared the two main deep learning frameworks, TensorFlow and PyTorch, to determine which yields the most accurate results. Using the “New PlantVillage Dataset” from Kaggle [1], the TensorFlow models achieved an accuracy of 90.94% for the basic CNN architecture and 95.59% for the transfer learning architecture with VGG19, whereas the PyTorch models achieved 93.47% for the basic CNN architecture and 98.53% for the transfer learning architecture with ResNet34. Finally, after examining the feasibility of the models' implementation and discussing the main problems that may be encountered, the models were deployed in a mobile application using the TFLite and PyTorch Mobile Flutter SDKs, embedding them as an internal feature of the mobile device without any need for cloud access, an approach known as edge AI.
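A minimal sketch of the PyTorch transfer-learning setup named above (a pretrained ResNet34 with a new classification head) follows; the class count, dataset path and hyper-parameters are assumptions rather than the values used in the paper.

```python
# Sketch of PyTorch transfer learning with ResNet34 for leaf-disease classification
# (illustrative; class count and data directory are assumptions).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 38  # assumed: PlantVillage-style disease/healthy class folders

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("plantvillage/train", transform=transform)  # hypothetical path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:       # one pass shown; real training runs several epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```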
Item Covid-19 classification using deep learning (2021)
Djaber, Abderraouf; Guedouar, Mohammed-Elfateh; Cherifi, Dalila
Coronavirus disease 2019 (COVID-19) is a fast-spreading infectious disease that causes lung pneumonia; it has killed millions of people around the world and has a significant impact on public healthcare. The diagnostic approach to the infection is mainly divided into two broad categories, a laboratory-based approach and a chest radiography approach, where CT imaging tests showed some advantages in prediction over the other methods. Due to limited medical capacity and the dramatic increase in suspected cases, the need has emerged for a quick, accurate and automated method to mitigate the overloading of radiologists' efforts for diagnosis. To achieve this goal, our work is based on developing machine and deep learning algorithms to classify chest CT scans into Covid or non-Covid classes. We worked on two dissimilar datasets from different sources, a small one of 746 images and a larger one with 14,486 images. We proposed various machine learning models: an SVM with different kernel types, a K-NN model with different distance measures, and an RF model with two different numbers of trees. Moreover, two CNN-based approaches were developed, the first with one convolution layer followed by a pooling layer, and the second with two consecutive convolution layers followed by a single pooling layer each time. The machine learning models showed better performance than the CNNs on the small dataset, while on the large dataset the CNNs outperform these algorithms. To improve performance, transfer learning was also used, with the pre-trained InceptionV3 and ResNet50V2 trained on the same datasets. Among all the examined classifiers, ResNet50V2 achieved the best scores, with 86.67% accuracy, 93.94% sensitivity, 81% specificity and 86.11% F1-score on the small dataset, while the respective scores on the large dataset were 97.52%, 97.28%, 97.77% and 97.60%. Experimental observations suggest the potential applicability of the ResNet50V2 transfer learning approach in real diagnostic scenarios, which could be of very high utility in terms of achieving fast testing for COVID-19.

Item Covid-19 Detecting in Computed Tomography Lungs Images Using Machine and Transfer Learning Algorithms (Informatica, 2023)
Cherifi, Dalila; Djaber, Abderraouf; Guedouar, Mohammed-Elfateh; Feghoul, Amine; Chelbi, Zahia Zineb; Ait Ouakli, Amazigh
Coronavirus disease 2019 (COVID-19), a rapidly spreading infectious disease, has led to millions of deaths globally and has had a significant impact on public healthcare due to its association with severe lung pneumonia. The diagnosis of the infection can be categorized into two main approaches, a laboratory-based approach and a chest radiography approach, where CT imaging tests showed some advantages in prediction over the other methods. Due to restricted medical capacity and the fast-growing number of suspected cases, the need has emerged for an immediate, accurate and automated method to alleviate the overcapacity of radiology facilities. To accomplish this objective, our work is based on developing machine and deep learning algorithms to classify chest CT scans into Covid and non-Covid classes. To obtain a good performance, the accuracy of the classifier should be high so that patients may have a clear idea about their state; to this end, many hyper-parameters can be changed to improve the performance of the models used for identifying such illnesses. We worked on two dissimilar datasets from different sources, a small one consisting of 746 images and a large one with 14,486 images. We proposed various machine learning models: an SVM with different kernel types, a KNN model with different distance measures, and an RF model with two different numbers of trees. Moreover, two CNN-based approaches were developed, considering one convolution layer followed by a pooling layer, then two consecutive convolution layers followed by a single pooling layer each time. The machine learning models showed better performance compared to the CNN on the small dataset, while on the larger dataset the CNN outperforms these algorithms. To improve the performance of the models, transfer learning has also been used, with the pre-trained InceptionV3 and ResNet50V2 trained on the same datasets. Among all the examined classifiers, ResNet50V2 achieved the best scores, with 86.67% accuracy, 93.94% sensitivity, 81% specificity and 86% F1-score on the small dataset, while the respective scores on the large dataset were 97.52%, 97.28%, 97.77% and 98%. Experimental interpretation suggests the potential applicability of the ResNet50V2 transfer learning approach in real diagnostic scenarios, which might be of very high usefulness in achieving fast testing for COVID-19. Povzetek (Slovenian summary, translated): The research focuses on developing machine and deep learning algorithms to classify chest CT images into Covid and non-Covid classes. The results show that the ResNet50V2 transfer learning approach is the most effective for fast COVID-19 testing.
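Both Covid CT studies above report ResNet50V2 transfer learning as the strongest classifier. The sketch below shows what such a setup can look like in Keras, freezing the pretrained backbone and adding a binary Covid/non-Covid head; the data directory, image size and training settings are assumptions, not the authors' configuration.

```python
# Sketch of ResNet50V2 transfer learning for Covid vs non-Covid CT classification
# (illustrative; dataset directory and hyper-parameters are assumptions).
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_scans/train", image_size=IMG_SIZE, batch_size=32)  # hypothetical path

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained backbone

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # Covid vs non-Covid
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```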
Item ECG features extraction using AC/DCT for biometric (IEEE, 2017)
Cherifi, Dalila; Adjerid, Chaouki; Boukerma, Billal; Zebbiche, Badreddine; Nait-Ali, Amine

Item EEG signal feature extraction and classification for epilepsy detection (Slovene Society Informatika, 2022)
Cherifi, Dalila; Falkoun, Noussaiba; Ouakouak, Ferial; Boubchir, Larbi; Nait-Ali, Amine
Epilepsy is a neurological disorder of the central nervous system, characterized by sudden seizures caused by abnormal electrical discharges in the brain. Electroencephalogram (EEG) is the most common technique used for epilepsy diagnosis, generally through manual inspection of the EEG recordings of active seizure periods (ictal). Several techniques have been proposed throughout the years to automate this process. In this study, we developed three different approaches to extract features from the filtered EEG signals. The first approach extracts eight statistical features directly from the time-domain signal. The second approach uses only frequency-domain information, applying the Discrete Cosine Transform (DCT) to the EEG signals and then extracting two statistical features from the lower coefficients. The last approach uses a tool that combines both time- and frequency-domain information, the Discrete Wavelet Transform (DWT). Six different wavelet families were tested with their different orders, resulting in 37 wavelets, and the first three decomposition levels were tested with every wavelet. Instead of feeding the coefficients directly to the classifier, we summarized them in 16 statistical features. The extracted features are then fed to three different classifiers, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM), and Artificial Neural Network (ANN), to perform two binary classification scenarios: healthy versus epileptic (mainly from interictal activity), and seizure-free versus ictal. We used a benchmark database, the Bonn database, which consists of five different sets. In the first scenario we took six different combinations of the available data, while in the second scenario we took five combinations. For epilepsy detection (healthy vs epileptic), the first approach performed poorly; using the DCT improved the results, but the best accuracies were obtained with the DWT-based approach. For seizure detection, the three methods performed quite well, but the third method had the best performance and was better than many state-of-the-art methods in terms of accuracy. After carrying out the experiments on the whole EEG signal, we separated the five rhythms and applied the DWT to them with the Daubechies 7 (db7) wavelet for feature extraction. We observed that accuracies close to those recorded before can be achieved with only the Delta rhythm in the first scenario (epilepsy detection) and the Beta rhythm in the second scenario (seizure detection).
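The third approach in the epilepsy study summarizes DWT sub-band coefficients with statistical features. A minimal PyWavelets sketch with the db7 wavelet at three decomposition levels is given below; the four statistics per band are illustrative and not the paper's exact 16-feature set.

```python
# Sketch of DWT-based feature extraction from a single EEG segment
# (illustrative; the paper's full feature set is not reproduced here).
import numpy as np
import pywt

def dwt_features(signal, wavelet="db7", level=3):
    """Decompose one EEG segment and summarize each sub-band with simple statistics."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA3, cD3, cD2, cD1]
    features = []
    for band in coeffs:
        features.extend([
            np.mean(band),
            np.std(band),
            np.mean(np.abs(band)),
            np.max(band) - np.min(band),
        ])
    return np.array(features)

# Example: a dummy short segment standing in for a filtered Bonn recording.
rng = np.random.default_rng(0)
segment = rng.standard_normal(178)   # assumed segment length
print(dwt_features(segment).shape)   # 4 bands x 4 statistics = 16 features
```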
Item Effect of eyes and eyebrows on face recognition system performance (IEEE, 2014)
Radji, N.; Cherifi, Dalila; Azrar, A.

Item Fusion of face recognition methods at score level (IEEE, 2017)
Cherifi, Dalila; Cherfaoui, Fateh; Yacini, Si Nabil; Nait-Ali, Amine

Item Impact of spiking neurons leakages and network recurrences on event-based spatio-temporal pattern recognition (Frontiers Media SA, 2023)
Bouanane, Mohamed Sadek; Cherifi, Dalila; Chicca, Elisabetta; Khacef, Lyes
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are attracting increasing interest for low-latency and low-power inference at the edge. However, multiple spiking neuron models have been proposed in the literature, with different levels of biological plausibility and different computational features and complexities. Consequently, there is a need to define the right level of abstraction from biology in order to get the best performance in accurate, efficient and fast inference on neuromorphic hardware. In this context, we explore the impact of synaptic and membrane leakages in spiking neurons. We confront three neural models with different computational complexities, using feedforward and recurrent topologies, for event-based visual and auditory pattern recognition. Our results showed that, in terms of accuracy, leakages are important when there is both temporal information in the data and explicit recurrence in the network. Additionally, leakages do not necessarily increase the sparsity of spikes flowing in the network. We also investigated the impact of heterogeneity in the time constant of leakages; the results showed a slight improvement in accuracy when using data with a rich temporal structure, validating similar findings obtained in previous studies. These results advance our understanding of the computational role of neural leakages and network recurrences, and provide valuable insights for the design of compact and energy-efficient neuromorphic hardware for embedded systems. (A toy leaky integrate-and-fire sketch illustrating membrane leakage appears at the end of this listing.)

Item Impact of thatcher effect, double illusion and inversion on face recognition (IEEE, 2015)
Radji, Nadjet; Cherifi, Dalila; Azrar, Arab

Item Importance of eyes and eyebrows for face recognition system (IEEE, 2015)
Radji, Nadjet; Cherifi, Dalila; Azrar, Arab

Item Introduction to 2D face recognition (John Wiley and Sons, 2013)
Naït-Ali, Amine; Cherifi, Dalila
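Returning to the spiking-neurons item above, the following toy NumPy loop illustrates what a membrane "leak" means computationally in a leaky integrate-and-fire (LIF) neuron: without input, the potential decays back toward rest. The time constant, threshold and step current are arbitrary, and none of this reproduces the three neuron models compared in the paper.

```python
# Tiny leaky integrate-and-fire (LIF) illustration of membrane leakage
# (illustrative only; constants are arbitrary, not the paper's models).
import numpy as np

dt = 1e-3          # 1 ms time step
tau_mem = 20e-3    # membrane time constant: how fast the potential leaks away
v_thresh, v_reset = 1.0, 0.0
steps = 300

# Step input: no drive for the first 100 ms, then a constant supra-threshold current.
i_in = np.where(np.arange(steps) < 100, 0.0, 1.2)

v = 0.0
spike_times = []
for t in range(steps):
    # Leaky integration (forward Euler): with no input, v decays back toward 0.
    v += (dt / tau_mem) * (-v + i_in[t])
    if v >= v_thresh:
        spike_times.append(t * dt)
        v = v_reset

print("Spike times (s):", spike_times)
```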
