University of Aberdeen

Discovering the Fundaments About Why Neural Networks Are so Smart

Deadline: Open all year round
Self-funded

Project Description

Intelligence is one of the pillars that allows some animals to lead such complex lives. Recently proposed scientific approaches have been capable of simulating networked systems that reproduce emergent manifestations of behaviour similar to those observed in the brain. Yet why and how a neural network can be trained to process information and produce a logically intelligent output remains a great mystery, despite the explosive growth in this area: its success in solving complex tasks cannot today be fully explained in physical or mathematical terms. Contributing to this challenge is the grand goal of this PhD project: the creation of a general theory describing the fundamental mathematical rules, physical laws, and relevant properties and features behind the "intelligent" functioning of trained neural networks.

To this end, the project will focus on a simpler but also successful type of machine learning approach, named Reservoir Computing (RC), which has recently been linked to computations performed in the brain; other approaches might also be considered. In RC, training a dynamical neural network to process information about an input signal involves only the much easier task of learning how the network needs to be observed, without the more difficult task of making structural changes to it (unlike in deep learning, for example). We aim to show how the configuration of the network graph, and the chaotic and emergent collective synchronous behaviours of the dynamical network, contribute to the processing of an input signal leading to an intelligent response about it. We hope to find a minimal set of transformations that together reveal the most relevant fundamental mechanisms behind the network's ability to produce a meaningful output. The fundamental results of this project will be further exploited to create simpler but smarter neural networks that can process more information, more quickly, with fewer computational resources.
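
As a concrete illustration of what "learning how the network needs to be observed" means in practice, the sketch below (in Python with NumPy; the network size, spectral radius, and toy prediction task are illustrative assumptions, not the project's actual model) implements an echo state network, the standard RC architecture: a fixed random recurrent reservoir is driven by the input signal, and only a linear readout of its states is trained, here by ridge regression.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 200, 1  # reservoir size and input dimension (illustrative choices)

    # Fixed random reservoir, rescaled so its spectral radius is 0.9
    # (a common recipe for the echo-state property). It is never trained.
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))  # fixed input weights

    def run_reservoir(u):
        """Drive the reservoir with input sequence u; return its state at each step."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
            states.append(x.copy())
        return np.array(states)

    # Toy task (an assumption for illustration): one-step-ahead prediction of a sine.
    u = np.sin(0.2 * np.arange(1000))
    X = run_reservoir(u[:-1])  # "observations" of the network's internal dynamics
    y = u[1:]                  # prediction targets

    # The entire learning phase: ridge regression for the linear readout W_out.
    reg = 1e-6
    W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
    print("training MSE:", np.mean((X @ W_out - y) ** 2))

Because W and W_in stay fixed, the whole learning phase reduces to the single linear solve in the last lines. This is what makes RC so much cheaper to train than deep learning, and it is also why the roles of the fixed network's graph structure and collective dynamics become the central open questions this project addresses.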

Funding Information

This PhD project has no funding attached and is therefore available to students (UK/International) who are able to seek their own funding or sponsorship. Supervisors will not be able to respond to requests to source funding. Details of the cost of study can be found on the University of Aberdeen website.

Eligibility Requirements

Selection will be made on the basis of academic merit. Candidates from any major of study can apply; the successful candidate should have, or expect to obtain, a UK Honours degree at 2.1 or above (or equivalent). However, the applicant is expected to have a sufficiently good mathematical background, evidenced either by a degree in a natural science or engineering subject or by otherwise demonstrating mathematical expertise. The applicant is also expected to be fluent in a suitable programming language, or to demonstrate an aptitude for learning one within an appropriate time frame.

Application Process

Formal applications can be completed online: https://www.abdn.ac.uk/pgap/login.php

  • Apply for Degree of Doctor of Philosophy in Physics
  • State name of the lead supervisor as the Name of Proposed Supervisor
  • State ‘Self-funded’ as Intended Source of Funding
  • State the exact project title on the application form

When applying please ensure all required documents are attached:

  • All degree certificates and transcripts (Undergraduate AND Postgraduate MSc, officially translated into English where necessary)
  • Detailed CV, Personal Statement/Motivation Letter and Intended source of funding

Informal enquiries can be made to Dr M Baptista ([email protected]) with a copy of your curriculum vitae and a brief description of why you are interested in this project. All general enquiries should be directed to the Postgraduate Research School ([email protected]).

References

  • M. Lukoševičius, H. Jaeger, "Reservoir computing approaches to recurrent neural network training", Computer Science Review 3, 127 (2009).
  • H.-P. Ren, C. Bai, M. S. Baptista, C. Grebogi, "Weak connections form an infinite number of patterns in the brain", Scientific Reports 7, 46472 (2017).
  • G. Tanaka et al., "Recent advances in physical reservoir computing: A review", Neural Networks 115, 100 (2019).
  • P. T. M. Nguyen, Y. Hayashi, M. S. Baptista, T. Kondo, "Collective Almost Synchronization-based model to extract and predict features of EEG signals", Scientific Reports 10, 16342 (2020).
  • S. Krishnagopal, M. Girvan, E. Ott, B. R. Hunt, "Separation of chaotic signals by reservoir computing", Chaos 30, 023123 (2020).
  • E. Bollt, "On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD", Chaos 31, 013108 (2021).