Publications Internationales

Permanent URI for this collection: https://dspace.univ-boumerdes.dz/handle/123456789/13

Search Results

Now showing 1 - 10 of 12
  • Item
    Design and implementation of a self-driving car using deep reinforcement learning: A comprehensive study
    (Elsevier, 2025) Djerbi, Rachid; Rouane, Anis; Taleb, Zineb; Saradouni, Safia
    This paper presents a groundbreaking and comprehensive study on the design, implementation, and evaluation of a self-driving car utilizing deep reinforcement learning, showcasing significant advancements in autonomous vehicle technology. Our robust framework integrates three innovative AI models for essential functionalities: road detection, traffic sign recognition, and obstacle avoidance. The system architecture, structured around a three-layer “DDD” (Data, Detection, Decision) approach, involves meticulous data preprocessing for traffic signs and road data, followed by specialized Deep Learning models for each detection task, including a CNN for traffic signs, a CNN for road detection, and the pre-trained MobileNet-SSD for obstacle detection. A reinforcement learning agent in the Decision Layer processes these outputs for real-time control (steering, acceleration, braking) through a continuous learning process with environmental feedback. The research encompasses both extensive simulation in Unity, leveraging the ML-Agents toolkit for agent training across diverse environments, and crucial real-world deployment. Our reward/punishment system in the simulation environment, based on collisions with road markers and obstacles, refined the agent's decision-making. The trained AI models were successfully exported and deployed onto a physical prototype, controlled by a Raspberry Pi and equipped with a camera and ultrasonic sensors. Real-world testing affirmed the robust performance of the physical model in detecting roads, recognizing traffic signs, and effectively avoiding obstacles. Quantitative results demonstrate compelling performance, including over 90% accuracy in obstacle detection and a 15% improvement in navigation efficiency compared to traditional algorithms under controlled simulation conditions. Model evaluation metrics show a 98% accuracy, 12% loss, and a prediction rate exceeding 77%.
    This study not only contributes a comprehensive framework for autonomous vehicle development but also highlights the transformative potential of deep reinforcement learning for creating intelligent and adaptable autonomous systems in both virtual and real-world scenarios, paving the way for safer and more efficient transportation technologies.
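    The collision-based reward/punishment scheme described above can be sketched as a per-step reward function. All constants and signal names below are illustrative assumptions, not values reported in the paper:

```python
# Hypothetical sketch of a reward/punishment scheme of the kind described:
# collisions with obstacles and road markers are punished, forward progress
# is rewarded. The magnitudes here are illustrative assumptions only.

def step_reward(hit_obstacle: bool, hit_road_marker: bool,
                distance_gained: float) -> float:
    """Return the reward for one simulation step."""
    if hit_obstacle:
        return -10.0                     # heavy punishment: collision
    reward = 0.1 * distance_gained       # small reward for progress along the road
    if hit_road_marker:
        reward -= 1.0                    # punishment for drifting off the lane
    return reward
```

Summed over an episode, a signal like this is what the ML-Agents training loop would maximize.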
  • Item
    A data driven fault diagnosis approach for robotic cutting tools in smart manufacturing
    (International Society of Automation, 2025) Afia, Adel; Gougam, Fawzi; Soualhi, Abdenour; Wadi, Mohammed; Tahi, Mohamed
    In smart manufacturing within Industry 4.0, tool condition monitoring (TCM) is used to improve productivity and machine availability by leveraging advanced sensors and computational intelligence to prevent tool damage. This paper develops a hybrid methodology using heterogeneous sensor measurements for monitoring robotic cutting tools with four tool states: healthy, surface damage, flake damage and broken tooth. The proposed approach integrates the maximal overlap discrete wavelet packet transform (MODWPT) with health indicators to construct feature matrices for each tool state. Feature selection is performed using the tree growth algorithm (TGA) to reduce computation time and improve feature space separation by selecting only relevant features. The selected features are input into a Gaussian mixture model (GMM) to detect, identify and classify each tool state with high accuracy. The proposed method provides a classification accuracy of 99.04% for vibration, 95.51% for torque, and 91.67% for force signals. Using unseen vibration data, the model achieved a test accuracy of 98.44%, demonstrating a high degree of generalizability. Comparative analysis demonstrates that our proposed approach provides superior feature discrimination and model stability while balancing computational efficiency and classification accuracy, validating the TGA-GMM framework as an effective solution for tool fault diagnosis in noisy, high-dimensional data.
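    The classification stage can be illustrated with a minimal sketch: one diagonal Gaussian is fitted per tool state from its (already selected) feature vectors, and a new sample is assigned to the state with the highest log-likelihood. The real pipeline uses MODWPT-derived health indicators and TGA selection; the toy feature data below is purely illustrative:

```python
import math

# Minimal sketch: fit a diagonal Gaussian per tool state, then classify a
# sample by maximum log-likelihood. Toy 2-D feature vectors, not real data.

def fit_gaussian(samples):
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [sum((s[i] - mean[i]) ** 2 for s in samples) / n + 1e-6
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, models):
    return max(models, key=lambda state: log_likelihood(x, *models[state]))

# Toy feature vectors for two of the four tool states
models = {
    "healthy":      fit_gaussian([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15]]),
    "broken_tooth": fit_gaussian([[2.0, 2.1], [2.2, 1.9], [2.1, 2.0]]),
}
```

A full GMM would allow several Gaussian components per state; one component per state is the simplest degenerate case.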
  • Item
    Offline Arabic handwritten character recognition: from conventional machine learning system to deep learning approaches
    (2022) Faouci, Soumia; Gaceb, Djamel; Haddad, Mohammed
    Researchers have made great strides in the area of Arabic handwritten character recognition in recent decades, especially with the fast development of deep learning algorithms. The characteristics of Arabic manuscript text pose several problems for a recognition system. This paper presents a conventional machine learning system based on the extraction of a set of preselected features and an SVM classifier. In the second part, a simplified convolutional neural network (CNN) model is proposed, which is compared to six other CNN models based on pre-trained architectures. The suggested methods were tested using three databases: two versions of the OIHACDB dataset and the AIA9K dataset. The experimental results show that the proposed CNN model obtained promising results, as it is able to recognise 94.7%, 98.3%, and 95.6% of the test sets of the three databases OIHACDB-28, OIHACDB-40, and AIA9K, respectively.
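    A simplified CNN such as the one proposed is assembled from a few standard blocks: convolution, a nonlinearity, and pooling. The pure-Python sketch below shows a single valid 2-D convolution followed by ReLU on a toy "character" image; it illustrates the building blocks only, not the paper's actual architecture:

```python
# One valid 2-D convolution + ReLU, the basic CNN building block.
# Toy data: a 4x4 binary image with a vertical edge, detected by the kernel.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def relu(fmap):
    return [[max(0, x) for x in row] for row in fmap]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],          # vertical-edge detector
          [-1, 0, 1],
          [-1, 0, 1]]
feature_map = relu(conv2d(image, kernel))
```

Stacking several such layers, with learned kernels and a final dense softmax layer, yields a character classifier.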
  • Item
    Using Machine Learning Algorithms for the Analysis and Modeling of the Rheological Properties of Algerian Crude Oils
    (Taylor and Francis Ltd., 2024) Souas, Farid; Oulebsir, Rafik
    This study investigated the rheological behavior of crude oils from the Tin Fouye Tabankort oil field in Southern Algeria, focusing on their viscosity under varying temperatures (10 °C–50 °C). The results show that the oils exhibited non-Newtonian shear-thinning behavior at low shear rates, with the viscosity decreasing as the temperature was increased. At higher shear rates, the Herschel–Bulkley model accurately described the oils’ transition to Newtonian behavior. Machine learning models, including CatBoost, LightGBM, and XGBoost, were trained on the experimental data to predict the viscosity, with CatBoost and XGBoost showing superior performance. We suggest these findings are valuable for improving the efficiency of oil transportation and processing.
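    The Herschel–Bulkley model mentioned above relates shear stress tau to shear rate gamma_dot as tau = tau0 + K * gamma_dot**n, with yield stress tau0, consistency index K, and flow index n (n < 1 gives shear-thinning behavior). A minimal sketch, with illustrative parameter values rather than ones fitted to the Tin Fouye Tabankort data:

```python
# Herschel-Bulkley model: tau = tau0 + K * gamma_dot**n.
# Parameter values in the test are illustrative, not fitted to real oils.

def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Shear stress (Pa) at shear rate gamma_dot (1/s)."""
    return tau0 + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, tau0, K, n):
    """Apparent viscosity (Pa.s) = shear stress / shear rate."""
    return herschel_bulkley_stress(gamma_dot, tau0, K, n) / gamma_dot
```

For n < 1 the apparent viscosity falls as the shear rate rises, which is the shear-thinning trend the paper reports at low shear rates.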
  • Item
    Development of an expert-informed rig state classifier using naive bayes algorithm for invisible loss time measurement
    (Springer Nature, 2024) Youcefi, Mohamed Riad; Boukredera, Farouk Said; Ghalem, Khaled; Hadjadj, Ahmed; Ezenkwu, Chinedu Pascal
    The rig state plays a crucial role in recognizing the operations carried out by the drilling crew and quantifying Invisible Lost Time (ILT). This lost time, often challenging to assess and report manually in daily reports, results in delays to the scheduled timeline. In this paper, the Naive Bayes algorithm was used to establish a novel rig state classifier. Training data, consisting of a large set of rules, was generated based on drilling experts’ recommendations. This dataset was then employed to build a Naive Bayes classifier capable of emulating the cognitive processes of skilled drilling engineers and accurately recognizing the actual drilling operation from surface data. The developed model was used to process high-frequency drilling data collected from three wells, aiming to derive the Key Performance Indicators (KPIs) related to each drilling crew’s efficiency and quantify the ILT during drilling connections. The obtained results revealed that the established rig state classifier excelled in automatically recognizing drilling operations, achieving a high success rate of 99.747%. The findings of this study offer valuable insights for drillers and rig supervisors, enabling real-time visual assessment of efficiency and prompt intervention to reduce ILT.
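    The rule-derived Naive Bayes idea can be sketched as follows: each training row is a tuple of discretized surface channels labeled with a rig state by an expert rule, and prediction picks the state with the highest posterior. The channel names and states below are illustrative assumptions, not the paper's actual rule set:

```python
import math
from collections import Counter, defaultdict

# Categorical Naive Bayes over discretized surface channels.
# Channels/states are hypothetical examples of expert-rule labels.

def train_nb(rows):
    """rows: list of (features_tuple, state) -> (priors, conditional counts)."""
    priors = Counter(state for _, state in rows)
    cond = defaultdict(Counter)          # (state, channel index) -> value counts
    for feats, state in rows:
        for i, f in enumerate(feats):
            cond[(state, i)][f] += 1
    return priors, cond

def predict_nb(feats, priors, cond):
    total = sum(priors.values())
    def score(state):
        s = math.log(priors[state] / total)
        for i, f in enumerate(feats):
            c = cond[(state, i)]
            s += math.log((c[f] + 1) / (sum(c.values()) + 2))  # Laplace smoothing
        return s
    return max(priors, key=score)

rows = [
    (("pumps_on", "rotating", "on_bottom"), "drilling"),
    (("pumps_on", "rotating", "on_bottom"), "drilling"),
    (("pumps_off", "static", "off_bottom"), "tripping"),
    (("pumps_off", "static", "off_bottom"), "tripping"),
]
priors, cond = train_nb(rows)
```

Run over a time-stamped channel stream, the predicted state sequence is what ILT and connection KPIs would be computed from.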
  • Item
    Geological mapping using extreme gradient boosting and deep neural networks: application to Silet area, central Hoggar, Algeria
    (Springer, 2022) Elbegue, Abderrahmane Aref; Allek, Karim; Zeghouane, Hocine
    Nowadays, machine learning algorithms are considered a powerful tool for analyzing big and complex data due to their ability to deliver accurate and fast results. The main objective of the present study is to demonstrate the effectiveness of the extreme gradient boosting (XGBoost) method, as well as of the employed data types, for mapping the Saharan region. To reveal the potential of XGBoost, we conducted two experiments. The first was to use different combinations of airborne gamma-ray spectrometry data, airborne magnetic data, Landsat 8 data and a digital elevation model. The objective was to train 9 XGBoost models in order to determine each data type's sensitivity in capturing the lithological rock classes. The second experiment was to compare XGBoost to deep neural networks (DNN) to display its potential against other machine learning algorithms. Compared to the existing geological map, the application of XGBoost reveals great potential for geological mapping, as it was able to achieve a correlation score of 78%, where igneous and metamorphic rocks are easily identified compared to sedimentary rocks. In addition, using different data combinations reveals the utility of airborne magnetic data in discriminating some lithological units. It also reveals the potential of the apparent density, derived from airborne magnetic data, to improve the algorithm's accuracy by up to 20%. Furthermore, the second experiment in this study indicates that XGBoost is a better choice for the geological mapping task compared to the DNN. The obtained predicted map shows that the XGBoost method provides an efficient tool for updating existing geological maps and editing new geological maps in regions with well-outcropped rocks.
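    Gradient boosting, the principle behind XGBoost, fits each new weak learner to the residuals of the current ensemble. The toy sketch below boosts one-split "stumps" on 1-D data; it illustrates the mechanism only, not the paper's multi-band geological setup:

```python
# Gradient boosting with one-split regression stumps on toy 1-D data.

def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error on the residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    _, t, lv, rv = best
    return lambda x: lv if x <= t else rv

def boost(xs, ys, rounds=20, lr=0.5):
    """Add one stump per round, each fitted to the current residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

model = boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0])
```

XGBoost adds regularization, second-order gradients, and deeper trees on top of this same residual-fitting loop, and for lithology mapping the target would be a rock class rather than a real value.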
  • Item
    Toward robust models for predicting carbon dioxide absorption by nanofluids
    (John Wiley and Sons Inc, 2022) Nait Amar, Menad; Djema, Hakim; Belhaouari, Samir Brahim; Zeraibi, Noureddine; https://doi.org/10.1002/ghg.2166
    The application of nanofluids has received increased attention across a number of disciplines in recent years. Carbon dioxide (CO2) absorption using nanofluids as solvents for CO2 capture is among these attractive applications, and has recently gained popularity across various industries. In this work, two robust explicit-based machine learning (ML) methods, namely the group method of data handling (GMDH) and genetic programming (GP), were implemented to establish accurate correlations that can estimate the absorption of CO2 by nanofluids. The correlations were developed using a comprehensive database that involved 230 experimental measurements. The obtained results revealed that the proposed ML-based correlations can predict the absorption of CO2 by nanofluids with high accuracy. Besides, it was found that the GP-based correlation yielded more precise predictions compared to the GMDH-based correlation. The GP-based correlation has an overall coefficient of determination of 0.9914 and an overall average absolute relative deviation of 3.732%. Lastly, the carried-out trend analysis confirmed the compatibility of the proposed GP-based correlation with the real physical tendency of CO2 absorption by nanofluids.
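    The average absolute relative deviation (AARD) quoted above is a standard accuracy metric for such correlations. A common definition, computed on illustrative numbers rather than the paper's data:

```python
# AARD (%) = (100 / N) * sum(|(predicted - measured) / measured|).
# The sample arrays in the test are illustrative, not the paper's data.

def aard_percent(measured, predicted):
    n = len(measured)
    return 100.0 / n * sum(abs((p - m) / m) for m, p in zip(measured, predicted))
```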
  • Item
    Optimization of WAG in real geological field using rigorous soft computing techniques and nature-inspired algorithms
    (Elsevier, 2021) Nait Amar, Menad; Jahanbani Ghahfarokhi, Ashkan; Ng, Cuthbert Shang Wui; Zeraibi, Noureddine
    To meet ever-increasing global energy demands, it is more necessary than ever to ensure increments in the recovery factors (RF) associated with oil reservoirs. Owing to this challenge, enhanced oil recovery (EOR) techniques are gaining increasing significance as robust strategies for producing more oil from mature reservoirs. Water alternating gas (WAG) injection is an EOR method aimed at improving the microscopic and macroscopic displacement efficiencies. To implement this technique successfully, it is of vital importance to optimize its operating parameters. This study targeted the implementation of robust proxy paradigms for investigating the suitable design parameters of a WAG project applied to real field data from “Gullfaks” in the North Sea. The proxy models aimed at significantly reducing the run-time associated with commercial simulators without sacrificing accuracy. To this end, machine learning (ML) approaches, including multi-layer perceptron (MLP) and radial basis function neural network (RBFNN), were implemented for estimating the parameters needed for the formulated optimization problem. To improve the reliability of these ML methods, they were evolved using optimization algorithms, namely the Levenberg–Marquardt algorithm (LMA) for MLP, and ant colony optimization (ACO) and grey wolf optimization (GWO) for RBFNN. The performance analysis of the proxy models revealed that MLP-LMA has better prediction ability than the other two proxy paradigms. In this context, the highest average absolute relative deviation observed per run by MLP-LMA was lower than 3.60%. Besides, the best-implemented proxy was coupled with ACO and GWO for resolving the studied WAG optimization problem. The findings revealed that the suggested proxies are cheap, accurate, and practical in emulating the performance of the numerical reservoir model.
    In addition, the results demonstrated the effectiveness of ACO and GWO in optimizing the parameters of the WAG process for the real field data used in this study.
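    Grey wolf optimization (GWO) drives candidate solutions toward the three best "wolves" (alpha, beta, delta) with a decaying exploration factor. The minimal 1-D sketch below applies the standard GWO position update to a toy quadratic objective standing in for the proxy model; bounds, population size, and iteration counts are illustrative assumptions:

```python
import random

# Minimal 1-D grey wolf optimization on a toy objective.
# Population size, iterations, and bounds are illustrative choices.

def gwo_minimize(f, lo, hi, n_wolves=20, iters=100, seed=0):
    rng = random.Random(seed)
    wolves = [rng.uniform(lo, hi) for _ in range(n_wolves)]
    best_x = min(wolves, key=f)
    for t in range(iters):
        wolves.sort(key=f)
        if f(wolves[0]) < f(best_x):
            best_x = wolves[0]
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)            # exploration factor decays 2 -> 0
        new = []
        for x in wolves:
            moves = []
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random() - 1)
                C = 2 * rng.random()
                moves.append(leader - A * abs(C * leader - x))
            new.append(min(max(sum(moves) / 3, lo), hi))   # average, clamp to bounds
        wolves = new
    return best_x

best = gwo_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the study's setting, `f` would be the trained MLP-LMA proxy evaluated on a vector of WAG design parameters instead of this one-dimensional toy function.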
  • Item
    Robust smart schemes for modeling carbon dioxide uptake in metal-organic frameworks
    (Elsevier, 2021) Nait Amar, Menad; Ouaer, Hocine; Abdelfetah Ghriga, Mohammed
    The emission of greenhouse gases such as carbon dioxide (CO2) is considered one of the most acute issues of the 21st century around the globe. Due to this fact, significant efforts have been made to develop rigorous techniques for reducing the amount of CO2 in the atmosphere. Adsorption of CO2 in metal–organic frameworks (MOFs) is one of the efficient technologies for mitigating the high levels of emitted CO2. The main aim of this study is to examine the aptitudes of four advanced intelligent models, including multilayer perceptron (MLP) optimized with Levenberg-Marquardt (MLP-LMA) and Bayesian Regularization (MLP-BR), extreme learning machine (ELM), and genetic programming (GP), in predicting CO2 uptake in MOFs. A comprehensive database was compiled from the literature, including more than 500 measurements of CO2 uptake in 13 MOFs at various pressures and two temperature values. The results showed that the implemented intelligent paradigms provide accurate estimations of CO2 uptake in MOFs. Besides, error analyses and comparison of the prediction performance revealed that the MLP-LMA model outperformed the other intelligent models and the prior paradigms in the literature. Moreover, the MLP-LMA model yielded an overall coefficient of determination (R2) of 0.9998 and an average absolute relative deviation (AARD) of 0.9205%. Finally, the trend analysis confirmed the high integrity of the MLP-LMA model in prognosticating CO2 uptake in MOFs, and its predictions overlapped closely with the measured values with changes in pressure and temperature.
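    The coefficient of determination (R2) reported above measures how much of the variance in the measured CO2 uptake the model explains. A standard definition, shown on illustrative numbers:

```python
# R2 = 1 - SS_res / SS_tot. Sample values below are illustrative only.

def r_squared(measured, predicted):
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot
```

An R2 of 0.9998, as reported for MLP-LMA, means the residual error is a tiny fraction of the total variance in the measurements.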
  • Item
    Rainfall–runoff modelling using octonion-valued neural networks
    (Taylor & Francis, 2021) Shishegar, Shadab; Ghorbani, Reza; Saad Saoud, Lyes; Duchesne, Sophie; Pelletier, Geneviève
    Rainfall–runoff modelling is at the core of any hydrological forecasting system. The high spatio-temporal variability of precipitation patterns, the complexity of the physical processes, and the large number of parameters required to characterize a watershed make the prediction of runoff rates quite difficult. In this study, a hyper-complex artificial neural network in the form of an octonion-valued neural network (OVNN) is proposed to estimate runoff rates. Evaluation of the proposed model is performed using a rainfall time series from a rain gauge near a Canadian watershed. Results illustrate the model's capacity to produce runoff rates more computationally efficiently than a physically based model. In addition, training the data using the proposed OVNN vs. a real-valued neural network shows less space complexity (1*3*1 vs. 8*10*8, respectively) and more accurate results (0.10% vs. 0.95%, respectively), which accounts for the efficiency of the OVNN model for real-time control applications.
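    An octonion-valued neuron packs 8 real inputs into one octonion and multiplies it by octonion weights, which is how an OVNN can compress a real-valued 8*10*8 network into a 1*3*1 one. The sketch below builds octonion multiplication via the Cayley-Dickson construction; it illustrates the arithmetic only, not the paper's network:

```python
# Cayley-Dickson construction: a hypercomplex number of length 2^k is a pair
# of numbers of length 2^(k-1), with (a,b)*(c,d) = (a*c - conj(d)*b,
# d*a + b*conj(c)) and conj((a,b)) = (conj(a), -b). Length 8 gives octonions.

def conj(x):
    if len(x) == 1:
        return x
    h = len(x) // 2
    return conj(x[:h]) + [-v for v in x[h:]]

def cd_mul(x, y):
    """Cayley-Dickson product of two hypercomplex numbers (len 1, 2, 4 or 8)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    left = [p - q for p, q in zip(cd_mul(a, c), cd_mul(conj(d), b))]
    right = [p + q for p, q in zip(cd_mul(d, a), cd_mul(b, conj(c)))]
    return left + right
```

One octonion weight thus carries an 8x8 real interaction in 8 stored numbers, which is the source of the OVNN's reduced space complexity.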