Browsing by Author "Ladjouzi, Samir"
Now showing 1 - 8 of 8
Item Contribution à l’amélioration des performances des algorithmes à réseaux de neurones artificiels : application à des systèmes dynamiques non linéaires (Universite de Boumerdes : Faculté des hydrocarbures et de la chimie, 2021) Ladjouzi, Samir; Grouni, Saïd (thesis supervisor)

Artificial Neural Networks (ANNs), inspired by the behaviour of biological neurons, now form an active research field that has attracted considerable attention thanks to its involvement in many areas such as classification, prediction, and control. To guarantee the efficiency of ANNs, however, several parameters must be taken into account: the choice of neural architecture, the number of layers and of neurons in each of them, the initialization of the connection weights, the selection of the data that define the training and test phases, and the type of learning algorithm. The objective of this work is to make a modest contribution to the field of ANNs through two proposals:

- the use of a new approach for training a multilayer network with a single hidden layer in order to solve the XOR problem;
- the application of the memory-neuron concept to modify the structure of neural networks and apply it to the identification and control of nonlinear systems.

Item Improved Pi-Sigma Neural Network for nonlinear system identification (2017) Ladjouzi, Samir; Grouni, Said; Kacimi, Nora; Soufi, Youcef

In this paper, we propose a modified architecture of the Pi-Sigma Neural Network (PSNN) based on two modifications: extending the activation function and adding delays to the neurons of the hidden layer.
These new networks, called respectively the Activation Function Extended Pi-Sigma (AFEPS) and the Delayed Pi-Sigma (DPS), are obtained first by adding an activation function to all hidden neurons and second by modifying the PSNN so that its hidden-layer outputs are fed to adjustable temporal units, which enable the new network to identify nonlinear systems. The architecture and dynamic equations of these networks are given in detail, together with their training algorithm. To demonstrate the effectiveness of the proposed networks, examples of nonlinear system identification are provided. The results show the capacity of Higher-Order Neural Networks (HONNs) for nonlinear system identification; in particular, the proposed neural architectures (AFEPS and DPS) give better results thanks to the modifications made to them.

Item A Modified Elman network with memory units for system identification (2018) Ladjouzi, Samir; Grouni, Said; Soufi, Youcef

In this paper, we propose a modified Elman network structure called the memory Elman neural network. The idea behind this new architecture is to add memory units to the neurons of the classic Elman network. These memory units are trainable temporal elements that make the output history-sensitive. By virtue of this capacity, the new architecture can take the past information of the neurons into account and use it to accomplish the task of the network. To show the performance of this new network, some dynamical systems are used for identification and the results are compared with those of the conventional Elman network.

Item A neural MPPT approach for a wind turbine (IEEE, 2017) Ladjouzi, Samir; Grouni, Said; Djebiri, Mustapha; Soufi, Youcef

Item A new training method for solving the XOR problem (2017) Ladjouzi, Samir; Grouni, Said; Kirat, Abderrahmen; Soufi, Youcef

Training of Artificial Neural Networks (ANNs) is an important step in making a network able to accomplish the desired task.
This learning capacity makes such networks applicable in many areas, such as modeling and control. However, many training algorithms have drawbacks such as too many parameters to estimate and a long computation time. In this paper, we propose a very simple method to train a Single Hidden Layer Perceptron (SHLP), based on replacing the traditional training phase with another approach: resolution of a Neural Least Mean Square (NLMS) problem. The key idea of this method is to compute some of the network’s weights with the Least Mean Square (LMS) formula and to leave the other weights at their initial values. This new training method is applied to the classical XOR problem and the results are compared with those of the conventional backpropagation algorithm. The results were satisfactory, and the comparison with the classical algorithm showed that our method reduces several quantities in the learning process, namely the computation time, the overall squared error, the number of iterations, and the number of weights to be adjusted.

Item PID Control of DC Servo Motor using a Single Memory Neuron (IEEE, 2018) Ladjouzi, Samir; Grouni, Said; Soufi, Youcef

In this paper, a novel approach to determining the optimal values of a PID controller is presented. The proposed method uses a single memory neuron whose weights represent the PID parameters. These weights are updated by the well-known bio-inspired Particle Swarm Optimization algorithm. To show the efficiency of our method, we have applied it to control a DC servo motor used as an actuator for a robot arm manipulator. The obtained results are compared with those of a fuzzy logic controller.

Item PID controller parameters adjustment using a single memory neuron (Elsevier, 2020) Ladjouzi, Samir; Grouni, Said

This paper presents the use of a single memory neuron to find the optimal gains of a PID controller.
The adopted strategy and its principal equations are discussed. The efficiency of the proposed method is demonstrated on two problems that frequently occur in industry: a single-tank application and a boiler-and-heat-exchanger application. A comparison with the Ziegler–Nichols method is presented in order to prove the effectiveness of our approach.

Item A Single-Neuron-Based Temperature Control of a Continuous Stirred Tank Reactor (Springer Nature, 2024) Ladjouzi, Samir; Grouni, Said

In this paper, a new technique to determine the best values of a PID controller is presented. The proposed scheme uses a single-neuron controller whose weights represent the PID parameters. The weights are adjusted with a recent meta-heuristic algorithm called the DragonFly Algorithm. To show the effectiveness of our method, we have applied it to control a Continuous Stirred Tank Reactor. The obtained results are compared with those of several algorithms: Ziegler–Nichols, a Genetic Algorithm, and Particle Swarm Optimization.
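The single-neuron PID idea shared by the last three items can be sketched as follows: the neuron’s three weights are the controller gains (Kp, Ki, Kd), and a swarm optimizer searches for the weights that minimize a closed-loop error cost. This is only a minimal illustration; the first-order toy plant, the gain ranges, and all PSO constants below are our own assumptions, not values taken from the papers.

```python
import numpy as np

def simulate(gains, setpoint=1.0, steps=200, dt=0.05):
    """Integral-of-squared-error cost of a PID loop on a toy plant.

    The plant y' = -y + u is a stand-in: the actual plants used in the
    papers (DC servo motor, tank, CSTR) are not given in the abstracts.
    """
    kp, ki, kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt  # the 3 neuron weights
        e_prev = e
        y += dt * (-y + u)                                # Euler step of the plant
        if not np.isfinite(y) or abs(y) > 1e6:            # diverged: reject gains
            return np.inf
        cost += e * e * dt
    return cost

def pso_tune(n_particles=15, iters=40, seed=0):
    """Minimal particle swarm search over the three PID gains."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 2.0, (n_particles, 3))   # each particle is a gain triple
    vel = np.zeros_like(pos)
    p_best = pos.copy()
    p_cost = np.array([simulate(p) for p in pos])
    g_best = p_best[p_cost.argmin()].copy()
    g_cost = p_cost.min()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, 0.0, 10.0)
        cost = np.array([simulate(p) for p in pos])
        better = cost < p_cost
        p_best[better], p_cost[better] = pos[better], cost[better]
        if cost.min() < g_cost:
            g_best, g_cost = pos[cost.argmin()].copy(), cost.min()
    return g_best, g_cost
```

Swapping the swarm update for another metaheuristic (e.g. the DragonFly Algorithm used in the 2024 item) changes only how the three weights are searched; the single-neuron controller itself is unchanged.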

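For the Pi-Sigma item above, the forward pass and the AFEPS modification can be sketched in a few lines. The dimensions, weights, and the choice of tanh are illustrative, and the AFEPS variant reflects our reading of the abstract rather than the paper’s exact equations.

```python
import numpy as np

def pi_sigma(x, W, b, act=np.tanh):
    """Standard Pi-Sigma forward pass: an activation applied to the
    product of K linear summing ('sigma') units."""
    sums = W @ x + b            # K linear summing units
    return act(np.prod(sums))   # product ('pi') unit, then activation

def afeps(x, W, b, act=np.tanh):
    """AFEPS reading of the abstract: the activation is extended to
    every summing unit before the product is taken."""
    return act(np.prod(act(W @ x + b)))
```

The DPS variant would additionally pass each summing unit’s output through trainable temporal (delay) units before the product; it is omitted here because the abstract does not specify the delay structure.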