Keynotes

Ivan Chorbev

Title: Empowering Citizens: Navigating the Ethical Landscape of Big Data and AI

Bio:

Ivan Chorbev is a Full Professor and Head of the Software Engineering Department at the Faculty of Computer Science and Engineering at UKIM. His main interests and activities include software engineering, machine learning, ERP systems, student information systems, learning management systems, assistive technologies, and eHealth. He served for seven years, over two terms, as dean of the faculty, a legal entity with 4,000 students and 100 employees. He is a member of the Board of Directors of the Business Accelerator UKIM, representing the major shareholder, the Ss Cyril and Methodius University in Skopje (UKIM). He also serves as a project and startup evaluator for the Fund for Innovation and Technological Development of North Macedonia and the Fund for Innovation of Serbia, and works as an expert consultant for the analysis and development of ICT systems, software information systems, and ERP systems in banks, parliaments, corporations, and industry.

He has published more than 100 research papers in the areas of machine learning, medical data mining, telemedicine, assistive technologies, software engineering, heuristic optimization algorithms, and combinatorial optimization. He has been involved in or coordinated 20 international and national applied and research projects funded by TEMPUS, FP6, H2020, Erasmus, COST, and Horizon Europe, and has developed and managed digital platforms and information systems for over two decades.

Abstract:

We will delve into the critical intersection of ethics, big data, and artificial intelligence, exploring the next steps in fostering citizen empowerment. As we witness the rapid evolution of technology, it is imperative to address the ethical considerations surrounding data collection, analysis, AI implementation, and cybersecurity. This presentation will assess the current ethical landscape, emphasizing the need for transparency, accountability, and inclusivity in the development and deployment of AI systems. We will examine the potential impact of big data and AI on citizen empowerment, focusing on the opportunities and challenges that arise. The discussion will include real-world examples of how these technologies can be harnessed to enhance civic engagement, decision-making processes, and public services. Additionally, we will highlight the risks associated with data misuse, privacy infringements, and algorithmic bias, emphasizing the importance of ethical frameworks and regulations to safeguard citizen rights.
The talk will conclude with a forward-looking perspective, proposing actionable steps for policymakers, technologists, and citizens alike to collaboratively shape a future where big data and AI contribute positively to citizen empowerment. By fostering a dialogue on ethics, inclusivity, and responsible innovation, we aim to lay the groundwork for a society where technological advancements align with democratic values, ensuring that the benefits of AI are equitably distributed for the greater empowerment of citizens.

Piotr A. Kowalski

Title: Why Is Explainable Artificial Intelligence Indispensable in Neural Network Tools?

Bio:

Prof. Piotr A. Kowalski holds the position of Professor at the AGH University of Krakow, working at the Faculty of Physics and Applied Computer Science, as well as at the Systems Research Institute of the Polish Academy of Sciences. He earned his Master’s degree in Teleinformatics and Automatic Control (both with honours) from the Cracow University of Technology in 2003, followed by a Ph.D. in Data Science from the Polish Academy of Sciences in 2009. In 2018, he achieved the D.Sc. (habilitation) degree in Artificial Neural Networks at the Systems Research Institute of the Polish Academy of Sciences. In 2019, he was appointed as a University Professor at AGH University of Science and Technology in Krakow.
His research interests lie in the field of information technology, mainly focusing on intelligent methods such as neural networks, fuzzy systems, and nature-inspired algorithms, applied to complex systems and knowledge discovery processes. From 2018 to 2023, he served as a member of the management group and led the conference grant for young scientists within COST Action 17124 DigForAsp (Digital forensics: evidence analysis via intelligent systems and practices), funded by the European Cooperation in Science and Technology (COST). Additionally, he has been actively involved in various research and development projects, including those funded by the Ministry of Science, the National Centre for Research and Development, and the Małopolska Centre for Entrepreneurship.

Piotr A. Kowalski is a member of the Polish Information Processing Society and the Institute of Electrical and Electronics Engineers, particularly the IEEE Computational Intelligence Society. Currently, he serves as an editor and a member of the editorial board for several scientific journals, and he is a member of the scientific committee for numerous prestigious scientific conferences. Furthermore, he is a member of the Discipline Council for Information and Communication Technology at AGH and the Scientific Council of NASK PIB. He is also a reviewer of numerous academic nominations (PhD, DSc), research grants, and scientific articles, contributing his expertise to assessing scientific achievements in various fields.

Abstract:

As neural networks continue to evolve and permeate various domains, the inherent complexity of their decision-making processes raises critical questions about transparency and interpretability. This presentation will show why Explainable Artificial Intelligence (XAI) methods are indispensable in neural network tools, delving into the intricacies of black-box models and illuminating the significance of understanding and justifying the decisions that neural networks make.

Through a comprehensive exploration of real-world applications, challenges, and emerging trends, the talk will underscore explainability’s pivotal role in fostering trust, facilitating model validation, and ensuring ethical deployment. By scrutinising the intersection of neural networks and explainability, it will address current imperatives and chart a course for future advancements, emphasising the need for a symbiotic relationship between artificial intelligence and human comprehension.

During the presentation, the critical importance of Explainable Artificial Intelligence (XAI) will be emphasised through a compelling exploration of its role in addressing the challenges posed by the inherent complexity of neural networks. Concrete examples will show how the lack of interpretability in black-box models can hinder trust, impede widespread adoption, and potentially lead to ethical concerns. Historical and contemporary approaches designed specifically for neural networks will be discussed to underscore the urgency of implementing XAI methods. Early attempts to interpret neural network decisions, such as saliency maps and feature importance techniques, will be explored. Moreover, recent advancements in explainability, including state-of-the-art methods like LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), sensitivity methods, and other procedures, will be highlighted, elucidating how these techniques contribute to enhanced model interpretability.
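To give a concrete flavour of the gradient-based techniques named above, the minimal sketch below (an illustration only, not material from the talk; the tiny network and random input are assumed placeholders) computes a saliency map in PyTorch by backpropagating the top class score to the input features:

    # Minimal gradient-based saliency sketch (illustrative placeholders only).
    import torch
    import torch.nn as nn

    # A small stand-in classifier: 10 input features, 3 classes.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    model.eval()

    # One random input sample; gradients with respect to it are requested.
    x = torch.randn(1, 10, requires_grad=True)

    # Forward pass, then backpropagate the top class score to the input.
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # The absolute input gradient marks the features the prediction is most
    # sensitive to -- the essence of a gradient-based saliency map.
    saliency = x.grad.abs().squeeze()
    print(saliency)

Larger gradient magnitudes indicate the input features to which the model’s prediction is most sensitive, which is the core idea shared by the saliency and sensitivity methods mentioned above.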

Through illustrative examples, the presentation will showcase instances where XAI methods have successfully elucidated neural network decision-making processes. These examples will not only underscore the practical significance of explainability but also demonstrate how it can be seamlessly integrated into the development and deployment life cycle of neural network tools.