Topic title |
Possible scientific supervisors |
Source of funding |
Research and implementation of brain-computer interface methods based on deep neural networks
|
prof. dr. Vacius JUSAS |
state-funded |
Research Topic Summary.
A human-computer interface is a form of communication between a user and a computer in which the user issues commands to the computer in a natural form. Such an interface can be implemented in various forms depending on its purpose and user requirements. Natural and user-friendly implementations shorten the time a user needs to learn the tool.
A brain-computer interface (BCI) is a system that converts brain activity into control signals for a computer or some other electronic device. Such a system has a large field of application, ranging from assisting disabled persons to playing games. The exact relationship between brain activity and the commands sent to muscles has not been disclosed yet. Nevertheless, mathematical methods make it possible to analyse brain activity and to form control signals for external electronic devices. For this purpose, a method is needed to record and save the brain activity. Electroencephalography (EEG) is one such method: a non-invasive investigation of the bioelectrical activity of the brain. The measured signals of brain activity are non-stationary and non-linear, and they depend directly on the particular person. Consequently, it is difficult to distinguish effective control signals. For this purpose, feature extraction and classification methods are used. Neural networks are dominant among the classification methods, and deep neural networks show very good results in image recognition and classification. Since the signals of brain activity can be presented in the form of an image, this approach applies naturally. Research in this field is carried out worldwide; however, there is always room for improvement of the obtained results.
The purpose of the investigation is to apply deep neural networks to recognize and classify the signals of a brain-computer interface.
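As a rough illustration of the pipeline described above (signal, time-frequency image, classifier), here is a minimal sketch on synthetic data; the sampling rate, the class frequencies, and the linear classifier standing in for a deep network are all invented for the example.

```python
# Minimal sketch, synthetic data only: turn an "EEG" segment into a
# time-frequency image and classify it. A deep CNN would replace the
# linear model in actual research; all parameters here are invented.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 250  # assumed sampling rate, Hz

def fake_eeg(freq, n=1000):
    """Synthetic single-channel segment: a sine burst plus noise."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(n)

# Two imagined classes, e.g. two mental tasks with 10 Hz vs 20 Hz signatures.
X, y = [], []
for label, f in enumerate((10, 20)):
    for _ in range(50):
        _, _, S = spectrogram(fake_eeg(f), fs=fs, nperseg=128)
        X.append(S.ravel())  # flatten the time-frequency image into features
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```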
|
Research and implementation of a blockchain-based professor reputation model
|
prof. dr. Vacius JUSAS |
state-funded |
Research Topic Summary.
Blockchain technology enables recording of business transactions online in a secure, transparent, efficient and auditable way. This technology makes it possible to create truly autonomous smart devices that can exchange data (including business data) without the help of a centralized intermediary. Eliminating intermediaries would reduce the costs of many public services, increase the efficiency of the sharing economy and e-government, and could contribute to the implementation of a smart city vision.
Blockchain technology allows the creation of value chains, i.e. complete sequences of business processes that enable an enterprise to bring a particular service or product to market and receive remuneration in a safe, efficient, and undeniable manner. An essential element of such value chains is smart contracts, i.e. autonomous agents (programs) operating in blockchains and ensuring that business rules are followed in business transactions. Smart contracts can scan data, perform the necessary calculations according to a specified algorithm and protocol, and pass the results on to other smart contracts in blockchains without human intervention, acting only in accordance with the specified business rules. The practical implementation of blockchain technologies would allow for an innovative transformation of the public services and management sector. One small part of the public sector is university professors. The requirements for the qualification of professors vary widely. The submission of documents proving the fulfilment of the requirements, and the fulfilment itself, are supervised by administrative staff. Although the requirements keep increasing and cover more and more areas, they do not cover all the areas in which professors carry out their activities; therefore, those activities remain underestimated. Professors could supervise other professors without the aid of administrative staff, and blockchain technology is well suited to this purpose.
The aim of the research is to apply blockchain technology to the development of a professor’s reputation model.
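To make the core mechanism concrete, below is a toy, hedged sketch of a hash-linked ledger of reputation events; the event fields and scores are invented, and real work would target a smart-contract platform rather than plain Python.

```python
# Illustrative sketch only: a toy hash-linked ledger of reputation events.
# Field names and scores are invented; a real system would run as a smart
# contract on a blockchain platform rather than as local Python objects.
import hashlib, json, time

def make_block(prev_hash, event):
    body = {"prev": prev_hash, "time": time.time(), "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

chain = [make_block("0" * 64, {"professor": "A", "activity": "publication", "score": 5})]
chain.append(make_block(chain[-1]["hash"], {"professor": "A", "activity": "peer review", "score": 2}))

# Integrity check: each block must reference the hash of its predecessor,
# so tampering with any recorded event breaks the chain.
for prev, blk in zip(chain, chain[1:]):
    assert blk["prev"] == prev["hash"]
print("reputation of A:", sum(b["event"]["score"] for b in chain))
```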
|
Artificial intelligence-based model for predicting symptoms of early dementia through the analysis of multimodal feature signals
|
prof. dr. Rytis MASKELIŪNAS |
state-funded |
Research Topic Summary.
A large proportion of our society is at risk of developing senile dementia, or at least being affected by some of its symptoms, and dementia is being diagnosed at younger and younger ages every year. The aim of this thesis is to address the challenge of early diagnosis of dementia, which matters most at an early stage of the disease, when timely diagnosis gives individuals the greatest opportunity to slow the progression of the disease through a limited cycle of interventions and drug treatment. Challenges include the development and training of machine learning algorithms to predict symptoms in the early stages of dementia based on a range of multimodal data, such as EEG, ECG, eye-pupil measurements, fMRI, cognitive tests, etc.
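One common baseline for combining such modalities is early fusion, i.e. concatenating per-modality features before classification; the sketch below uses entirely synthetic data and an off-the-shelf classifier purely for illustration.

```python
# Hedged sketch of "early fusion" of multimodal features; all data are
# synthetic stand-ins for the modalities named in the topic summary.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
eeg = rng.standard_normal((n, 16))   # imagined EEG band-power features
ecg = rng.standard_normal((n, 4))    # imagined heart-rate-variability stats
cog = rng.integers(0, 31, (n, 1))    # imagined cognitive test scores (0-30)
y = (cog.ravel() < 24).astype(int)   # toy label: low score ~ at risk

X = np.hstack([eeg, ecg, cog])       # early fusion: concatenate modalities
model = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", model.score(X, y))
```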
|
Unsupervised deep learning-based model for detection of failures in industrial machinery
|
prof. dr. Rytis MASKELIŪNAS |
state-funded |
Research Topic Summary.
Despite its spreading popularity, AI-based predictive maintenance is still not fully exploited in industrial plants worldwide. This is mainly because training predictive maintenance algorithms requires a large number of failure examples (known data), which are not always available. For this reason, the aim of this thesis is to develop an efficient unsupervised predictive maintenance approach based on deep learning algorithms for the prediction and forecasting of failures in industrial machinery.
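The unsupervised idea can be illustrated with a minimal sketch: fit a model to normal operation only and flag samples it reconstructs poorly. PCA stands in here for the deep autoencoder the thesis would actually develop, and all sensor data are synthetic.

```python
# Minimal sketch of unsupervised failure detection: learn "normal"
# behaviour only, then flag poorly reconstructed samples. PCA stands in
# for a deep autoencoder; the sensor data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
normal = rng.standard_normal((500, 10))            # healthy-machine readings
faulty = normal[:20] + rng.normal(0, 4, (20, 10))  # injected failure patterns

pca = PCA(n_components=3).fit(normal)              # trained on normal data only

def recon_error(X):
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

threshold = np.percentile(recon_error(normal), 99)
print("flagged failures:", int((recon_error(faulty) > threshold).sum()), "of", len(faulty))
```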
|
Anomaly detection algorithms based on unsupervised learning methods
|
doc. dr. Agnė PAULAUSKAITĖ-TARASEVIČIENĖ |
state-funded |
Research Topic Summary.
Data quality is one of the most important factors in the training process of machine learning (ML) algorithms. A number of data studies have been performed, focusing on how data annotation quality can influence an algorithm’s performance and output, on the most common annotation errors, and on future challenges arising from the disproportionate growth in the amount of data (the big data problem) relative to the available workforce of experts. Despite the best efforts to automate this exhaustive data annotation process, it is still a partially (or fully) manual task, performed by humans with different backgrounds, from intermediate annotators to annotation experts, so most annotation errors are caused by humans.
Thus, successful data annotation is a workforce challenge for two reasons: the ability to recruit and maintain sufficient staff to handle large amounts of unstructured and unlabeled data, and the ability to ensure high annotation quality, which requires additional testing, hierarchical inspections, and so on. Such problems are most commonly encountered in activities and processes that require the detection of anomalies. This covers industries ranging from manufacturing, medicine, and security to agriculture, where automatic anomaly detection is a priority for ensuring the efficiency of production processes and a high level of digitization. The need is most pronounced in production processes where a timely response to technological process deviations (anomalies) is necessary to ensure self-monitoring with operational efficiency, fast reaction, and automatic decision making. The poultry meat production process is one example of such a complex system, in which it is important to ensure the health of broilers in order to maximize production. Chickens’ health depends not only on the quality of the feed supplied, but also on the conditions of the rearing environment, such as litter, humidity, water pH, farm air quality, air component ratio, and so on. During the production of poultry meat it is necessary to monitor each parameter and detect abnormal deviations. The development of such algorithms and methods requires the resolution of emerging scientific uncertainties: it is not clear which methods of signal processing and artificial intelligence can effectively and reliably detect anomalies in digital signals; which methods of artificial intelligence and optical measurement technologies could be used to measure normal and anomalous behaviour; and whether existing unsupervised learning algorithms are sufficient to detect anomalies accurately. With innovative solutions and AI algorithms for autonomous anomaly detection, it is possible to address not only the problems of the poultry sector, but also problems in the production of other (not only food) products and challenges in the security and health sectors.
The main goal is to create an intelligent system based on unsupervised learning algorithms for accurate anomaly detection in the digital signals of complex processes.
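A common unsupervised baseline for exactly this setting is an isolation forest over raw sensor readings; the sketch below invents farm-like parameters (humidity, water pH, air quality) and injects deviations to show the workflow.

```python
# Sketch under assumptions: unsupervised anomaly detection on imagined
# farm-sensor readings; parameter ranges and the injected deviations
# are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
readings = np.column_stack([
    rng.normal(60, 5, 1000),     # relative humidity, %
    rng.normal(7.0, 0.2, 1000),  # water pH
    rng.normal(50, 10, 1000),    # air quality index
])
readings[::100] += [25, -1.5, 60]  # inject occasional abnormal deviations

detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)
flags = detector.predict(readings)  # -1 marks suspected anomalies
print("anomalies flagged:", int((flags == -1).sum()))
```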
|
Self-supervision for weakly-supervised medical image segmentation
|
doc. dr. Tomas Iešmantas |
state-funded |
Research Topic Summary.
Deep learning holds the promise of great success in precision health, enabling faster and more accurate diagnosis of a wide range of diseases. Yet the application of deep learning methods is much further ahead in other areas of computer vision than in medical decision support. The main culprit for this lag is the lack of large labeled samples. To obtain an accurate model, a large labeled training sample is needed, which is very costly or not even possible in medicine: experts have to manually label each image for many patients, and this requires a lot of manpower, money, and time. One can rarely use crowd-sourcing approaches to label medical images, since an untrained person cannot accurately interpret them. However, hospitals manage huge collections of unlabeled or partially labeled patient images. Weakly supervised learning is an active research area and might be a partial remedy to the above-mentioned problems; however, most of the methods are developed and tested on non-medical images. There is a research gap in this area, and this project aims to address part of it.
Recently, it has been shown that self-supervision can speed up the convergence of the training process, and that supervised-learning tasks then require far fewer annotated training samples to reach equivalent performance. We hypothesize that the effect would be similar in weakly supervised segmentation and object detection. In other words, self-supervision would strengthen weak labels, and the overall effect would be a higher accuracy of the final weakly supervised segmentation model.
The main objective of this project is the development of a self-supervision methodology for weakly supervised segmentation of medical images.
Improving weakly supervised methods would likely result in faster adoption of deep learning technologies in the medical decision-making process. This would shorten the time between the first visit and the diagnosis and potentially reduce mortality or various morbidities. In other words, weakly supervised learning has the potential to improve early diagnostics.
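The key property of self-supervision, labels that come for free from the data itself, can be shown with the classic rotation-prediction pretext task; the toy sketch below uses synthetic gradient patches and a linear model where the project would use a segmentation network.

```python
# Toy illustration of a self-supervision pretext task (rotation
# prediction): the labels are generated from the data itself, with no
# expert annotation. Patches are synthetic; a linear model stands in
# for the network whose learned features would initialize segmentation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
base = np.add.outer(np.linspace(0, 1, 8), np.zeros(8))  # vertical gradient
patches = base + 0.1 * rng.random((200, 8, 8))          # "unlabeled" patches

X, y = [], []
for p in patches:
    k = int(rng.integers(0, 4))       # rotate by k * 90 degrees
    X.append(np.rot90(p, k).ravel())
    y.append(k)                       # the "free" label is the rotation itself

pretext = LogisticRegression(max_iter=2000).fit(X, y)
print("pretext-task accuracy:", pretext.score(X, y))
```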
|
Hierarchicity-based (self-similar) heuristic algorithms for combinatorial optimization problems
|
prof. dr. Alfonsas MISEVIČIUS |
state-funded |
Research Topic Summary.
Optimization methods and algorithms are an important constitutive part of computer science and artificial intelligence. In particular, heuristic algorithms play a very significant role, to mention only local search (LS), tabu search (TS), genetic algorithms (GA), (evolutionary) population-based algorithms, and their numerous combinations (hybrids). The high efficiency of combined heuristic algorithms in solving various optimization problems was empirically confirmed several decades ago. Further demand for such algorithms has been stimulated by the growth of research and development (R&D) and by newly arising tasks, and has become even more relevant in recent years.
The current development of heuristic algorithms proceeds not only by combining different algorithms, but also by exploiting and expanding the inner structure (architecture) of the algorithms. One of the promising directions in this area is the use of so-called hierarchically-structured (hierarchicity-based, or simply hierarchical) heuristic (HH) algorithms. The central idea behind HH algorithms is the multiple adoption/utilization (reuse) of well-known heuristics (like LS, TS, GA). This is connected with what is known as self-similarity: an object (in our case, an algorithm) is exactly or approximately similar to its constituent parts. Self-similarity is a universal principle, so we conjecture that it may also be deeply inherent and important for computational methods and algorithms. The idea is not entirely new, and some examples of hierarchical-like algorithms have already been investigated (e.g., iterated/hierarchical local search, master-slave genetic algorithms). Still, this is an area of active and progressing research. Further computational studies are required to accelerate research and to reveal the full potential of HH algorithms, as well as new enhancements (hybrid HH algorithms, population-based HH algorithms, etc.). The following are the particular stages of the research process (a toy sketch of the reuse principle follows the list):
- analysis of design and computational implementation of the HH algorithms;
- empirical testing and comparison of different variants/modifications of the HH algorithms;
- applications to real-world problems (with the focus on, among others, the virtual reality and augmented reality).
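As a hedged illustration of the reuse principle, the sketch below nests one heuristic inside another: a 2-opt local search is invoked repeatedly from perturbed solutions on a random toy TSP instance (iterated local search, one of the hierarchical-like schemes mentioned above).

```python
# Toy sketch of heuristic reuse: an outer loop repeatedly perturbs the
# best tour and re-runs the inner 2-opt local search (iterated local
# search on a random TSP instance; all data are synthetic).
import numpy as np

rng = np.random.default_rng(5)
pts = rng.random((20, 2))  # random city coordinates

def tour_len(t):
    return sum(np.linalg.norm(pts[t[i]] - pts[t[i - 1]]) for i in range(len(t)))

def local_search(t):  # inner heuristic: 2-opt descent
    improved = True
    while improved:
        improved = False
        for i in range(1, len(t) - 1):
            for j in range(i + 1, len(t)):
                cand = t[:i] + t[i:j][::-1] + t[j:]
                if tour_len(cand) < tour_len(t):
                    t, improved = cand, True
    return t

best = local_search(list(range(20)))
for _ in range(10):  # outer level: perturb, then reuse the inner heuristic
    pert = best[:]
    i, j = sorted(rng.choice(len(pert), 2, replace=False))
    pert[i], pert[j] = pert[j], pert[i]
    cand = local_search(pert)
    if tour_len(cand) < tour_len(best):
        best = cand
print("tour length:", round(tour_len(best), 3))
```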
|
Geometric iterative methods for data approximation using adaptive hierarchical splines
|
doc. dr. Svajūnas Sajavičius |
state-funded |
Research Topic Summary.
Many traditional methods for data interpolation and approximation rely on the solution of global linear systems. This severely limits their applicability, since even local modifications require repeatedly solving large-scale linear systems.
In recent years, geometric iterative methods have been used increasingly for data interpolation and approximation. These methods have an intuitive geometric meaning, are easily implemented, and allow easy fulfilment of various geometric requirements. Currently, such methods are used in computer-aided geometric design, image and surface reconstruction tasks, reverse engineering, etc.
To date, very few studies have investigated the possibilities of applying hierarchical spline technologies in geometric iterative methods. The distinguishing feature of hierarchical splines is local adaptive refinement. In practical applications, local adaptivity reduces the required computational resources and opens up possibilities for solving much more complex tasks.
The main aim of the research will be the development and investigation of geometric iterative methods for data approximation that efficiently exploit hierarchical spline technologies. The tasks will include the construction and software implementation of adaptive geometric iterative methods, as well as their theoretical and experimental analysis.
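The core idea of geometric iteration can be shown in a few lines: instead of solving a linear system, control points are repeatedly displaced by the residual vectors. The sketch below applies progressive iterative approximation to a Bézier curve (a simple non-hierarchical stand-in for the splines the research targets), with invented data points.

```python
# Minimal sketch of the geometric-iteration idea (progressive iterative
# approximation): control points move by the residual vectors until the
# curve passes through the data; no global linear system is solved.
# A Bezier curve stands in for the hierarchical splines of the topic.
import numpy as np
from math import comb

def bezier(P, ts):
    n = len(P) - 1
    B = np.array([[comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
                  for t in ts])
    return B @ P

Q = np.array([[0, 0], [1, 2], [2, -1], [3, 1.0]])  # invented data points
ts = np.linspace(0, 1, len(Q))                     # assigned parameters
P = Q.copy()                                       # initial control points

for _ in range(200):                               # geometric iteration
    P = P + (Q - bezier(P, ts))                    # move by residual vectors

print("max residual:", np.abs(Q - bezier(P, ts)).max())
```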
|
Design and analysis of long series dependence estimators
|
doc. dr. Tomas RUZGAS |
state-funded |
Research Topic Summary.
During the past decades, due to fundamental discoveries in the field of molecular biology, it has become a central subject of the biological sciences. The previous focus of attention has shifted from the identification of one specific gene to the greater opportunities made possible by the sequencing of complete genomes. That, in turn, opened the door to the technologies of the so-called post-genomic era, which are often based on computer analysis of the entire genome.
Even short sequences used in bioinformatics are substantial. The genome of a simple life form (a virus) can be very big and can exceed 3.5×10^5 nucleotides. The size of bacterial genomes varies from 0.5×10^6 to 10×10^6 nucleotides, and the human genome has about 3.12×10^9 nucleotides. Visualizing data of such size, or its statistics, is a major challenge. Maps of genomes are constructed in order to simplify the research. While working, one can inspect a fragment of a sequence, but not the whole of it, nor the attributes that belong only to a particular sequence and visually separate it from other sequences with different attributes. Short sequences (e.g. genes or proteins) are often compared with one another, but the results of such comparisons are hard to interpret when there are many of them.
Musical compositions are built from special information units, i.e. sequences of musical notes. Their length, in comparison to nucleotide sequences, is considerably shorter. One of the biggest classical music compositions, Ludwig van Beethoven’s Symphony No. 9 in D minor, takes almost 70 minutes and has about 3.8×10^4 notes. There are even longer compositions: Frederic Rzewski’s The Road is supposed to be one of the longest piano solos, lasting about 10 hours, and Jacob Mashak’s Beatus Vir for two pianos takes 11 hours. But even note sequences on the order of 10^3 are not easy to analyse and compare within a large variety of note sequences. Furthermore, the variety of different notes, compared with the four nucleotides, requires a larger, higher-dimensional feature space, and the differing durations of notes make the task even more complicated.
The research goal is to create knowledge about autocorrelation methods for long series that remain effective under non-stationarity. To achieve this goal, it is necessary to study mathematical estimators based on non-parametric analysis.
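For concreteness, here is a minimal non-parametric sample-autocorrelation estimator of the kind such a study would start from, applied to a synthetic AR(1) series standing in for a long symbolic sequence (nucleotides or notes mapped to numbers).

```python
# Sketch: a plain non-parametric sample-autocorrelation estimator applied
# to a synthetic strongly dependent series; real data would be a long
# symbolic sequence mapped to numbers.
import numpy as np

def acf(x, max_lag):
    x = np.asarray(x, float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(6)
x = np.zeros(5000)
for t in range(1, len(x)):           # AR(1) with coefficient 0.9
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()

print("ACF at lags 0..5:", np.round(acf(x, 5), 3))
```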
|
Design and analysis of statistical hypothesis tests
|
doc. dr. Tomas RUZGAS |
state-funded |
Research Topic Summary.
Hypothesis testing is one of the essential branches of data mining, on which many other tasks (discriminant analysis, image recognition, etc.) are based. The methodology for testing goodness-of-fit, homogeneity, symmetry, and independence hypotheses is gaining more and more attention in newly emerging areas of application: the analysis of genetic information processing, of astronomical objects, of computer technology and its peripheral data, and so on. Although many criteria for testing hypotheses already exist, various authors keep offering new ideas (Nguyen, 2017; Arnastauskaitė, Ruzgas, Bražėnas, 2021). Some hypothesis test statistics used in scientific works employ a class of probability metrics, the N-metrics (Klebanov, 2005). The research planned in this dissertation will extend the application of N-metric theory to the construction of hypothesis testing statistics.
The PhD student working on this dissertation will be expected to propose criteria based on N-metric theory and compare them with some of the classic criteria. Along with the theoretical results, the proposed N-metric type criteria will be examined using the Monte Carlo method. The tasks are to analyse criteria for homogeneity, symmetry, and independence, as well as simple and composite goodness-of-fit hypotheses, in one-dimensional and multidimensional cases, and to examine a wide range of alternative hypotheses. Finally, the developed algorithms should be verified by applying them to real data used in empirical studies (Karpusenkaite, Ruzgas, Denafas, 2016; Milonas, Ruzgas, Venclovas, Jievaltas, Joniau, 2021).
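The Monte Carlo workflow mentioned above can be sketched with the simplest possible example, a two-sample permutation test of homogeneity; the dissertation would plug an N-metric-based statistic in place of the mean difference used here.

```python
# Sketch of the Monte Carlo evaluation workflow: a two-sample permutation
# test of homogeneity. An N-metric-based statistic would replace the
# plain mean difference in the actual research.
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1, 100)
b = rng.normal(0.4, 1, 100)    # shifted sample: homogeneity should fail

obs = abs(a.mean() - b.mean())
pooled = np.concatenate([a, b])
exceed = 0
for _ in range(2000):          # Monte Carlo permutations
    rng.shuffle(pooled)
    exceed += abs(pooled[:100].mean() - pooled[100:].mean()) >= obs

print("Monte Carlo p-value:", exceed / 2000)
```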
|
Creation and investigation of a payment system based on blockchain technology with elements of non-commutative cryptography
|
doc. dr. Aleksejus MICHALKOVIČ |
state-funded |
Research Topic Summary.
Blockchain technologies are widely used in the modern world; almost every adult has heard of the Bitcoin cryptocurrency. The technology is also useful for banks, where it can be applied in the development of payment systems.
Cryptography plays a major role in this development: blocks are linked into a chain using cryptographic algorithms. These algorithms must be cryptographically strong enough to withstand quantum cryptanalysis. However, the algorithms used nowadays do not provide sufficient protection against such attacks; moreover, quantum algorithms applicable to the analysis of this technology are already known.
Hence it is time to take the next step forward, i.e., to implement elements of non-commutative cryptography in blockchain technology. Relying on the previously published results of our group’s research, we plan to propose a payment system based on quantum-cryptanalysis-resistant blockchain technology.
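As a two-line reminder of the algebraic property such schemes build on (nothing more than that), matrix multiplication is order-sensitive, unlike the commutative operations underlying many classical protocols.

```python
# Tiny illustration of non-commutativity, the algebraic property that
# non-commutative cryptography exploits: A @ B differs from B @ A.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])
print(A @ B)  # [[2 3] [4 7]]
print(B @ A)  # [[3 4] [4 6]] -- a different matrix
```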
|
Deep learning assisted subsurface data lab
|
doc. dr. Mayur Pal |
state-funded |
Research Topic Summary.
Subsurface flow modelling for applications such as contaminant transport, nuclear waste disposal, geothermal flows, carbon capture and sequestration, and hydrocarbon flows involves the use of numerical simulation tools. The computational domains for such flow problems are often very large, running into several kilometres. Although numerical simulation tools can capture the flow physics accurately, they are computationally very expensive due to the multiphysics and multiscale nature of the flow problems.
With recent advances in deep learning-based approaches, researchers at Virginia Tech and MIT have demonstrated that it is possible to create data-driven models for solving PDE problems using deep convolutional neural networks. Research carried out at the Department of Mathematical Modelling at KTU has likewise demonstrated that deep learning methods can be used to solve problems associated with hydrocarbon flow modelling.
The main objective of this research will be the development of deep learning-based methods for a general-purpose toolkit titled “deep learning assisted subsurface data lab”, which will help investigate the possibilities of extending deep learning methods to subsurface flow problems for a variety of applications, such as contaminant transport, nuclear waste disposal, geothermal flows, carbon capture and sequestration, and hydrocarbon flows.
An open-source toolkit for data-driven modelling of subsurface flows will be developed. At least four papers are expected to be published in international journals with impact factors during the four-year term of doctoral studies.
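The data-driven surrogate idea can be sketched end to end on a toy 1-D Darcy problem: a cheap finite-volume solver generates training pairs, and a small network learns to shortcut it. Grid size, permeability statistics, and the monitored quantity are all invented for the example.

```python
# Hedged sketch of a data-driven PDE surrogate: solve a toy 1-D Darcy
# problem many times with a finite-volume solver, then train a small
# network to shortcut the solver. All parameters are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

def solve_pressure(k):
    """Solve -(k p')' = 0 on a 1-D grid with p=1 at the left, p=0 at the
    right. k holds one permeability value per interval; returns interior
    node pressures."""
    m = len(k) - 1                          # number of interior nodes
    A = np.zeros((m, m)); b = np.zeros(m)
    for i in range(m):
        A[i, i] = k[i] + k[i + 1]
        if i > 0:
            A[i, i - 1] = -k[i]
        if i < m - 1:
            A[i, i + 1] = -k[i + 1]
    b[0] = k[0] * 1.0                       # left boundary pressure
    return np.linalg.solve(A, b)

X, y = [], []
for _ in range(500):
    k = np.exp(rng.normal(0, 0.5, 16))      # random log-normal permeability
    X.append(np.log(k))
    y.append(solve_pressure(k)[7])          # pressure at a monitoring node

surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                         random_state=0).fit(X, y)
print("surrogate R^2 on training data:", round(surrogate.score(X, y), 3))
```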
|
Reactive transport modelling in the presence of discrete fracture networks in porous media
|
doc. dr. Mayur Pal |
state-funded |
Research Topic Summary.
Fractures commonly occur in many reservoirs (typically in carbonate and tight formations) and can have a dominant effect on flow and transport in hydrocarbon production, CO2 sequestration, contaminant transport, nuclear waste disposal, and pesticide transport. Earlier models have been able to capture some fracture effects on flow, but have suffered from model assumptions such as idealized well-connectivity and convenient alignment with the grids selected for modeling.
Current models treat the matrix and fracture as different media communicating through mass exchange factors. Model assumptions that are used in practice are typically dual porosity and dual permeability models (Dual continuum models) that yield large errors O(1) on the estimated recovery factors due to the inherent conceptual limitations of these models. The dual continuum models introduce extra degrees of freedom through the decoupling of the matrix and fracture media and use of exchange factors that are process and geometry dependent.
A discrete fracture network flow simulator will be developed, together with a broadly applicable grid generation tool that applies directly to fracture modeling at the DFN scale and at the reservoir scale, including difficult reservoir features such as deviated wells and faults. The grid generation development will initially involve 2-D grids with mixed cell types (triangles and quadrilaterals) and, after extensive testing, will be extended to 3-D. The simulator will be capable of performing single-phase flow simulations and upscaling porous media properties.
The discrete fracture network (DFN) models developed by various researchers so far have proposed explicit coupling of fracture flow with reactive transport modelling (RTM). Implicit reactive transport coupling is an active area of research, and building reactive transport modelling capabilities into a DFN modelling tool will lead to a unique development. The new functionality will consider various reactions in the aqueous phase, between chemical species in the aqueous phase and minerals, coupled to transport processes in the DFN simulator. The main driver for implementing RTM is to position the DFN simulator with the unique capability to assess the long-term fate of CO2 sequestration, hydrogen storage, gas storage, and contaminant and pesticide transport in fractured porous media.
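The coupling at the heart of RTM can be sketched with the simplest operator-splitting scheme: advect a species along a single fracture, then apply a first-order reaction, alternating each time step. Velocities, rates, and grid sizes below are invented.

```python
# Sketch of the operator-splitting idea behind reactive transport:
# alternate an advection step and a reaction step along a single
# fracture. All physical parameters are invented for illustration.
import numpy as np

nx, nt = 100, 200
c = np.zeros(nx); c[0] = 1.0               # inlet concentration held at 1
v, dt, dx, rate = 1.0, 0.004, 0.01, 2.0    # CFL number = v*dt/dx = 0.4

for _ in range(nt):
    # transport step: first-order upwind advection
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])
    c[0] = 1.0
    # reaction step: first-order decay of the species
    c *= np.exp(-rate * dt)

print("cells with concentration above 0.01:", int((c > 0.01).sum()))
```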
|
Computer vision algorithms for solving object shape recognition problems
|
doc. dr. Armantas OSTREIKA |
state-funded |
Research Topic Summary.
Computer vision is an interdisciplinary field that deals with computer algorithms for gaining high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that can interface with other action processes and suggest an appropriate action. Machine vision systems are used in a variety of applications, including manufacturing, medicine, traffic monitoring, and security systems.
In industrial processes, cameras capture important information, store and archive it, and allow users or software to make decisions based on the image content. Machine vision systems can measure and count products, calculate their weight or volume, and inspect goods at top speed with respect to predefined characteristics. They automatically extract limited but crucial information from huge quantities of data, or they help experts interpret images by filtering, optimizing, supplementing, or quickly retrieving them and making them available.
Artificial intelligence algorithms have recently become an integral part of solving computer vision problems. The proposed research will aim to improve existing common methodologies, focusing on real-world service or production processes for greater efficiency and reliability, by adapting and improving machine learning methods such as decision trees, k-nearest neighbors, naïve Bayes, support vector classifiers, feed-forward neural networks, and convolutional neural networks.
In this research, we expect to overcome the operative restrictions that hamper the adoption of new algorithms. It is expected to modify existing methodologies and implement novel research ideas based on artificial intelligence and new heuristic approaches.
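As a hedged baseline of the kind the research would compare against, the sketch below computes a classic shape descriptor (circularity) on synthetic polygons and classifies it with k-nearest neighbors, one of the methods listed above.

```python
# Hedged sketch: a classic shape feature (circularity) plus k-NN, a
# baseline shape-recognition method. Shapes are synthetic polygons,
# not real images.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)

def circularity(poly):
    """4*pi*area/perimeter^2: 1.0 for a circle, lower for other shapes."""
    d = np.roll(poly, -1, axis=0) - poly
    per = np.sum(np.hypot(d[:, 0], d[:, 1]))
    area = 0.5 * abs(np.sum(poly[:, 0] * np.roll(poly[:, 1], -1)
                            - poly[:, 1] * np.roll(poly[:, 0], -1)))
    return [4 * np.pi * area / per**2]

X, y = [], []
ang = np.linspace(0, 2 * np.pi, 24, endpoint=False)
for _ in range(100):
    r = 1 + 0.05 * rng.standard_normal(24)           # noisy circle
    circle = np.column_stack([r * np.cos(ang), r * np.sin(ang)])
    square = (np.array([[0, 0], [1, 0], [1, 1], [0, 1.0]])
              + 0.02 * rng.standard_normal((4, 2)))  # noisy square
    X += [circularity(circle), circularity(square)]
    y += [0, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", knn.score(X, y))
```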
|
Confidential and verifiable transaction system in blockchain
|
prof. dr. Eligijus SAKALAUSKAS |
state-funded |
Research Topic Summary.
Recently, blockchain technologies have been actively penetrating business process monitoring and control due to their transparent security against cheating. The main platforms for blockchain development are IBM Hyperledger Fabric (IBM HF) and Ethereum. At the same time, however, they do not provide transactions that are both confidential and verifiable by anybody in the network. In Ethereum, transaction amounts are open. In IBM HF, transaction amounts can be made available only to permissioned users, or they are open to everybody. In business processes it is important to hide transaction amounts because of possible discounts and other privileges granted to certain users. In addition, business processes must be transparent, providing authenticity of business operations. If these conditions hold, a trusted brokerage service can be created to involve potential stakeholders.
The research objective is to create authentic and confidential/verifiable transactions using a new approach to the creation of cryptographic protocols, based on the so-called matrix power function (MPF) proposed by the author.
MPF can be regarded as a conjectured one-way function since, as shown in results published by the author and his scientific group, its inversion corresponds to an NP-complete problem. It is assumed so far that cryptographic protocols based on NP-complete problems are resistant to quantum cryptanalysis.
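To fix intuition for “confidential yet verifiable” before the task list, here is a standard textbook device, a hash commitment that hides an amount until it is opened; this is deliberately not the MPF-based construction the research itself would develop.

```python
# Standard illustration of "confidential yet verifiable": a hash
# commitment hides a transaction amount but lets it be checked on
# opening. The research would use MPF-based protocols instead; this
# is ordinary textbook material, not the proposed construction.
import hashlib, secrets

def commit(amount: int):
    blinding = secrets.token_hex(16)                 # secret random salt
    digest = hashlib.sha256(f"{amount}:{blinding}".encode()).hexdigest()
    return digest, blinding                          # publish digest only

def verify(digest, amount, blinding):
    return hashlib.sha256(f"{amount}:{blinding}".encode()).hexdigest() == digest

c, r = commit(250)
print("commitment:", c[:16], "...")
print("opens to 250:", verify(c, 250, r), "| opens to 300:", verify(c, 300, r))
```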
The main tasks:
1. To assess the existing results in the creation of confidential/verifiable transactions and their limitations.
2. To propose a new approach for the realization of confidential/verifiable transactions using traditional cryptographic methods and methods based on the MPF approach.
3. To propose new methods for the authentication of business actors using identification and e-signature protocols based on the MPF approach.
4. To investigate the security of the proposed authenticated and confidential/verifiable transactions against quantum cryptanalysis and to define their suitability for the so-called post-quantum security era.
|
New algorithms for the stabilization and control of chaotic systems
|
prof. habil. dr. Minvydas Kazys RAGULSKIS |
state-funded |
Research Topic Summary.
The importance of the project stems first of all from the fact that most real-world systems are nonlinear. One of the main features of any nonlinear system is its ability to exhibit complex reactions to simple excitation. The main objective and importance of the planned research programme lie in the ability to implement the newly developed predictive control algorithms for chaotic nonlinear systems.
The scientific goals of the research program are grouped into several work packages. First, it is necessary to develop techniques and algorithms for the identification of the governing mathematical models from the digital signature of the systems. The theory of H-ranks will be developed and applied for the description of multiple nonlinear systems interacting in complex networks with time delays and fractional order derivatives.
The second work package will comprise the development of novel stabilization techniques (including finite-time stabilization) for the control of partial clusters, self-synchronization, and temporary divergence of those clusters. The third work package will solve problems related to the stabilization (including finite-time stabilization) of the whole network, the re-organization of the network to the target structures, and the control of those structures.
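A toy example of what “stabilization of a chaotic system” means in practice: the logistic map at r = 3.9 is chaotic, yet a small state feedback, switched on only near the unstable fixed point, pins the orbit there. The gain below is chosen by hand to cancel the local multiplier; this merely illustrates the goal, not the project’s network-level methods.

```python
# Toy sketch of chaos control: stabilize the unstable fixed point of the
# chaotic logistic map x -> r x (1 - x) with small local feedback. The
# gain K = r - 2 cancels the local multiplier (f'(x*) = 2 - r).
r = 3.9
K = r - 2
xstar = 1 - 1 / r                 # unstable fixed point

x, orbit = 0.3, []
for t in range(300):
    # feedback acts only after t = 100 and only near the fixed point
    u = K * (x - xstar) if t > 100 and abs(x - xstar) < 0.1 else 0.0
    x = r * x * (1 - x) + u
    orbit.append(x)

print("chaotic phase:   ", [round(v, 3) for v in orbit[95:100]])
print("controlled phase:", [round(v, 3) for v in orbit[-5:]])
```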
The expected outcomes of the project are at least several papers in Q1-Q2 international journals, presentations at international conferences, participation in research projects, and applications in modelling and investigating the dynamics of the human cardiovascular system.
|
Multicriteria decision making in finance using explainable artificial intelligence
|
doc. dr. Audrius KABAŠINSKAS |
state-funded |
Research Topic Summary.
Artificial intelligence is one of the hot topics of the early 21st century in the physical and technological sciences. Large-scale investors such as pension funds, banks, and investment companies have been using artificial intelligence for some time; however, most of the techniques they use are limited to machine learning or artificial neural networks. The resulting solutions are difficult to explain and interpret. Worse, it is often nearly impossible to verify whether the resulting solution is what we actually aspired to and whether it is optimal. Because investment decisions usually depend on many factors, such as expected return, risk, ethics, the investor's risk profile, and personal characteristics, all of these must be taken into account when developing an autonomous system or robo-advisor. Quite often such problems are easily solved by conventional methods; however, as the amount of information and the number of criteria increase, the complexity of the task can become intractable even for supercomputers. It is therefore necessary to develop new methods based on artificial intelligence that are understandable and explainable, and whose results can be validated.
Therefore, the main task is to create artificial intelligence-based methods that are explainable, transparent, and interpretable for most investment decision makers.
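For contrast with black-box models, here is a sketch of an inherently interpretable multicriteria ranking, a weighted-sum score whose per-criterion contributions are visible to the decision maker; assets, criteria values, and weights are all invented.

```python
# Sketch of an inherently interpretable multicriteria ranking: each
# asset's score decomposes into visible per-criterion contributions.
# Assets, criteria values, and investor weights are invented.
import numpy as np

assets = ["fund A", "fund B", "fund C"]
# columns: expected return (max), risk (min), ESG score (max)
M = np.array([[0.07, 0.12, 0.6],
              [0.05, 0.05, 0.9],
              [0.09, 0.20, 0.4]])
weights = np.array([0.5, 0.3, 0.2])     # investor's stated priorities

norm = (M - M.min(0)) / (M.max(0) - M.min(0))  # min-max normalization
norm[:, 1] = 1 - norm[:, 1]                    # risk: lower is better
scores = norm @ weights
for name, s, contrib in zip(assets, scores, norm * weights):
    print(f"{name}: score {s:.2f}, contributions {np.round(contrib, 2)}")
```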
Related ongoing projects:
• "A FINancial Supervision and TECHnological Compliance training programme – FIN-TECH", No. H2020-ICT-2018-2, 2019–2021, https://www.fintech-ho2020.eu/
• "Fintech and Artificial Intelligence in Finance – Towards a transparent financial industry", COST Action 19130, 2020–2024, https://fin-ai.eu/
• DyMoDiF – dynamic models for digital finance, Czech Science Foundation, 2019–2023
|
Machine learning and information integration research for disease prediction and health care
|
prof. dr. Robertas ALZBUTAS |
state-funded |
Research Topic Summary.
The objective of the research is to create a methodology and test calculations based on machine learning and information integration for the most accurate possible disease prediction and health care.
Tasks:
1. Review and compare the possibilities of information integration methods and algorithms, as well as the application of related software and machine learning methods intended for health care.
2. Define accuracy criteria for information integration and smart systems, and their evaluation procedures, in the context of data relevant to health care.
3. Develop and demonstrate information integration tools whose usage increases accuracy and/or reduces the risk of incorrect decisions.
4. Perform pilot studies of smart systems intended for health care and create a methodology for the effective application of these systems.
For further information please contact the supervisor of the topic.
|
Artificial intelligence for monitoring and risk analysis of battery aging
|
prof. dr. Robertas ALZBUTAS |
state-funded |
Research Topic Summary.
More accurate estimation of battery State of Charge (SOC) and State of Health (SOH), as well as optimization of charging cycles and overall battery usage, could benefit from decision support tools that utilize and translate the information held in signals coming from the Battery Management System (BMS) better, i.e. faster and more precisely.
The objective of the research is to develop a method and system to support the BMS that is able to autonomously monitor battery SOH, estimate the risk of different types of malfunctions, and provide easy-to-interpret recommendations and decisions on battery usage.
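A deliberately simple baseline shows the estimation task: predict SOH (remaining capacity fraction) from cycle count and an internal-resistance proxy on synthetic fade data; the deep models in this topic would be benchmarked against something like this.

```python
# Hedged sketch with synthetic data: estimate battery SOH (remaining
# capacity fraction) from simple BMS-style features with a linear
# baseline the topic's deep models would aim to beat.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
cycles = np.arange(1, 501, dtype=float)
capacity = 1.0 - 0.0004 * cycles + 0.005 * rng.standard_normal(500)   # fade
resistance = 0.05 + 5e-5 * cycles + 0.001 * rng.standard_normal(500)  # growth
soh = capacity / 1.0                  # SOH relative to nominal capacity

X = np.column_stack([cycles, resistance])
model = LinearRegression().fit(X[:400], soh[:400])  # train on early cycles
print("true vs predicted SOH at cycle 450:",
      round(soh[449], 3), round(float(model.predict(X[449:450])[0]), 3))
```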
Tasks and expected results:
1. Data collection and development of deep neural network architectures for SOH monitoring (identification, characterization, and classification), dealing with data heterogeneity as well as time dependence.
2. Application and testing of the developed models and risk analysis approach on various data (e.g. a sample of annotated BMS data), taking into consideration data transformation optimization and calibration results.
3. Software development for deep learning-based SOH monitoring, taking into consideration data heterogeneity and risk analysis. The software will be released under the GNU General Public License for free usage by interested parties.
For further information on the topic and relevant R&D&I work, please contact the supervisor.
|