ELLIS Lightning Talks

The 19th International Conference on Information and Communication Technology in Education, Research, and Industrial Applications (ICTERI-2024) convened in Lviv, Ukraine, on September 23–27, 2024. A dedicated special track, “ELLIS Lightning Talks,” co-organized by the European Laboratory for Learning and Intelligent Systems (ELLIS), served as a dissemination platform for early-career researchers to present their recent work. The program and the six selected presentations are listed below.

Program

The talks were organized into two sessions (Session 1 and Session 2).

List of abstracts

Speaker: Adel Bibi, University of Oxford 

Title: Advances in AI Safety for Large Language Models

We delve into our research on AI safety, focusing on advances aimed at ensuring the robustness, alignment, and fairness of large language models (LLMs). The talk starts by exploring the challenges posed by the sensitivity of AI systems to input perturbations and strategies for providing provable guarantees against worst-case adversaries. Building on this, we navigate the alignment challenges and safety considerations of LLMs, addressing both their limitations and capabilities, with particular attention to instruction prefix tuning and its theoretical limitations for alignment. Finally, I will discuss fairness across languages in the tokenizers commonly used by LLMs.
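
To make the tokenizer-fairness point concrete, the sketch below counts how many tokens a common subword tokenizer spends on roughly parallel sentences in different languages; the checkpoint and sentences are illustrative placeholders, not the setup from the talk.

    # Minimal sketch: cross-lingual tokenization cost under a shared
    # subword tokenizer. The checkpoint is a placeholder; any
    # HuggingFace tokenizer can be substituted.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder checkpoint

    # Roughly parallel sentences (illustrative, not a curated corpus).
    parallel = {
        "English": "The weather is very nice today.",
        "Ukrainian": "Сьогодні дуже гарна погода.",
        "German": "Das Wetter ist heute sehr schön.",
    }

    baseline = len(tokenizer.encode(parallel["English"]))
    for lang, sentence in parallel.items():
        n = len(tokenizer.encode(sentence))
        # A premium above 1.0 means the language pays more tokens (and
        # hence more compute and cost) than English for similar content.
        print(f"{lang:10s} tokens={n:3d} premium={n / baseline:.2f}")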

Speaker: Anton Bushuiev, Czech Technical University in Prague

Title: Learning to design protein-protein interactions with enhanced generalization

Discovering mutations that enhance protein-protein interactions (PPIs) is critical for advancing biomedical research and developing improved therapeutics. While machine learning approaches have substantially advanced the field, they often struggle to generalize beyond training data in practical scenarios. The contributions of this work are three-fold. First, we construct PPIRef, the largest non-redundant dataset of 3D protein-protein interactions, enabling effective large-scale learning. Second, we leverage the PPIRef dataset to pre-train PPIformer, a new SE(3)-equivariant model that generalizes across diverse protein-binder variants. We fine-tune PPIformer to predict the effects of mutations on protein-protein interactions via a thermodynamically motivated adjustment of the pre-training loss function. Finally, we demonstrate the enhanced generalization of PPIformer by outperforming other state-of-the-art methods on new, non-leaking splits of standard labeled PPI mutational data, as well as on independent case studies: optimizing a human antibody against SARS-CoV-2 and increasing the thrombolytic activity of staphylokinase.
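
As a rough illustration of the thermodynamically motivated scoring idea, the sketch below scores a mutation by the log-odds between the wild-type and mutant amino acids under a masked model's per-position probabilities; the tensor shapes, helper names, and sign convention are assumptions for illustration, not the PPIformer code.

    # Log-odds mutation scoring from per-position amino-acid
    # log-probabilities produced by a masked pre-trained model.
    import torch

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def log_odds_score(log_probs, mutations):
        """Sum log p(wild-type) - log p(mutant) over mutated positions.

        log_probs: (L, 20) tensor of per-position log-probabilities.
        mutations: (position, wild_type_aa, mutant_aa) triples.
        Under this (assumed) convention, a higher score suggests a more
        destabilizing mutation for the interaction.
        """
        score = 0.0
        for pos, wt, mt in mutations:
            score += (log_probs[pos, AA_INDEX[wt]] - log_probs[pos, AA_INDEX[mt]]).item()
        return score

    # Toy usage with random stand-ins for model outputs.
    log_probs = torch.log_softmax(torch.randn(50, 20), dim=-1)
    print(log_odds_score(log_probs, [(10, "A", "W"), (23, "K", "E")]))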

Speaker: Kateryna Zorina, Czech Technical University in Prague

Title: Multi-Contact Task and Motion Planning Guided by Video

This work leverages instructional video to guide the solution of complex multi-contact task-and-motion planning problems in robotics. Towards this goal, we propose an extension of the well-established Rapidly-exploring Random Tree (RRT) planner that simultaneously grows multiple trees around grasp and release states extracted from the guiding video. Our key novelty lies in combining contact states and 3D object poses extracted from the guiding video with a traditional planning algorithm, which allows us to solve tasks with sequential dependencies, for example when an object needs to be placed at a specific location to be grasped later. To demonstrate the benefits of the proposed video-guided planning approach, we design a new benchmark with three challenging tasks: (i) 3D re-arrangement of multiple objects between a table and a shelf, (ii) multi-contact transfer of an object through a tunnel, and (iii) transferring objects on a tray, much as a waiter transfers dishes. We demonstrate the effectiveness of our planning algorithm on several robots, including the Franka Emika Panda and the KUKA KMR iiwa.
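
The multi-tree idea can be pictured with the stripped-down sketch below: one tree is seeded per grasp or release state taken from the video, all trees grow toward random samples, and trees that come close enough are bridged. The helper functions and the 1-D toy configuration space are illustrative assumptions, not the authors' implementation.

    import random

    def multi_tree_rrt(seed_states, sample_fn, steer_fn, dist_fn, eps=0.1, iters=2000):
        # One tree per keyframe state extracted from the video.
        trees = [[s] for s in seed_states]
        links = set()  # pairs of tree indices that have been connected
        for _ in range(iters):
            q_rand = sample_fn()
            for i, tree in enumerate(trees):
                q_near = min(tree, key=lambda q: dist_fn(q, q_rand))
                tree.append(steer_fn(q_near, q_rand))  # bounded step toward the sample
                for j, other in enumerate(trees):      # try to bridge nearby trees
                    if j != i and min(dist_fn(tree[-1], q) for q in other) < eps:
                        links.add(tuple(sorted((i, j))))
            if len(links) >= len(trees) - 1:           # all trees joined
                break
        return trees, links

    # Toy 1-D usage: the "grasp" and "release" seeds sit at 0.0 and 1.0.
    trees, links = multi_tree_rrt(
        seed_states=[0.0, 1.0],
        sample_fn=lambda: random.uniform(0.0, 1.0),
        steer_fn=lambda a, b: a + max(-0.05, min(0.05, b - a)),
        dist_fn=lambda a, b: abs(a - b),
    )
    print("connected tree pairs:", links)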

Speaker: Michal Neoral, Czech Technical University in Prague 

Title: MFT: Long-Term Tracking of Every Pixel

We propose MFT (Multi-Flow dense Tracker), a novel method for dense, pixel-level, long-term tracking. The approach exploits optical flows estimated not only between consecutive frames but also between pairs of frames at logarithmically spaced intervals. It selects the most reliable sequence of flows on the basis of estimates of geometric accuracy and occlusion probability, both provided by a pre-trained CNN. We show that MFT achieves competitive performance on the TAP-Vid benchmark, outperforming baselines by a significant margin while tracking densely at speeds orders of magnitude faster than state-of-the-art point-tracking methods. The method is insensitive to medium-length occlusions, and it is made robust by estimating flow with respect to the reference frame, which reduces drift.
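
The per-pixel selection step can be pictured with the minimal sketch below: among candidate flow chains computed at logarithmically spaced intervals, keep, for each pixel, the candidate with the lowest predicted uncertainty, discarding candidates flagged as occluded. The random arrays are stand-ins for the CNN outputs described above.

    import numpy as np

    H, W, K = 4, 4, 5  # image size; K candidate chains (e.g., deltas 1, 2, 4, 8, 16)

    flows = np.random.randn(K, H, W, 2)        # candidate flow fields (dx, dy)
    sigma = np.random.rand(K, H, W)            # predicted geometric uncertainty
    occluded = np.random.rand(K, H, W) > 0.8   # predicted occlusion mask

    # Invalidate occluded candidates, then pick the most reliable chain
    # per pixel (pixels with all candidates occluded fall back to index 0).
    sigma = np.where(occluded, np.inf, sigma)
    best = np.argmin(sigma, axis=0)            # (H, W) index of the chosen chain
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    selected = flows[best, rows, cols]         # (H, W, 2) selected flow per pixel
    print(selected.shape)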

Speaker: Denys Herasymuk, Ukrainian Catholic University

Title: Responsible Model Selection with Virny and VirnyView

Machine Learning (ML) models are being used to make decisions in increasingly critical domains. To determine whether models are production-ready, they must be comprehensively evaluated along a number of performance dimensions. Since measuring only accuracy and fairness is not enough for building robust ML systems, model evaluation involves at least three overall dimensions (correctness, stability, uncertainty) and three disparity dimensions evaluated on subgroups of interest (error disparity, stability disparity, uncertainty disparity). Adding to the complexity, these dimensions exhibit trade-offs with one another. Considering the multitude of model types, performance dimensions, and trade-offs, model developers face the challenge of responsible model selection.

In this paper, we present Virny, a comprehensive software library for model auditing and responsible model selection, along with an interactive tool called VirnyView. Our library is modular and extensible; it implements a rich set of performance and fairness metrics, including novel metrics that quantify and compare model stability and uncertainty, and it enables performance analysis based on multiple sensitive attributes and their intersections. The Virny library and the VirnyView tool are available at https://github.com/DataResponsibly/Virny and https://r-ai.co/VirnyView.
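
For intuition about one stability-style metric of the kind mentioned above, the sketch below trains a small bootstrap ensemble and measures per-point label agreement; it uses plain scikit-learn rather than the Virny API, and the dataset and model choices are illustrative.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, y_train, X_test = X[:400], y[:400], X[400:]

    # Bootstrap ensemble: each member sees a resampled training set.
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(20):
        idx = rng.integers(0, len(X_train), len(X_train))
        model = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    preds = np.stack(preds)  # (n_models, n_test)

    # Label stability: fraction of ensemble members that vote for the
    # majority label at each test point (1.0 = perfectly stable).
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    stability = (preds == majority).mean(axis=0)
    print(f"mean label stability: {stability.mean():.3f}")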

Speaker: Dmytro Fishman, University of Tartu, http://biit.cs.ut.ee

Title: Biomedical Computer Vision Lab: Computer Scientists Contributing to Medicine

This talk covers research at the forefront of biomedical imaging data analysis, leveraging machine learning and deep learning to create state-of-the-art software solutions for diverse biomedical imaging modalities. Collaborating with academic institutions, private companies, and hospitals, we address a wide range of biomedical challenges. Our partnership with Revvity has led to breakthroughs in microscopy image analysis, including a pioneering AI model for brightfield microscopy. In histopathology, our work with Estonian hospitals aims to automate and expedite pathology workflows, improving diagnostic accuracy and patient outcomes. Additionally, our AI research in medical imaging, in collaboration with the MedTech startup Better Medicine, focuses on developing the General Purpose Medical AI (GPAI) system, which uses weakly supervised models to streamline radiological workflows and reduce the burden on radiologists. Our efforts are particularly impactful in Estonia, a nation with progressive digitalization policies and centralized electronic health records. There, we are advancing the digital transformation of healthcare, ultimately improving patient care and accelerating therapeutic discoveries through innovative AI-driven analysis.
