PhD Defence & Symposium: 15 January

I’m happy to announce that I will defend my PhD dissertation, titled “Optimisation in Neurosymbolic Learning Systems”, on the 15th of January 2024. The defence will start at 13:45 in the Auditorium of the Vrije Universiteit Amsterdam. There will be a livestream.

Before the defence, we will organise a mini-symposium on Probabilistic Learning and Reasoning from 10:00 – 12:00 in NU-4B43 (the NU building of the VU Amsterdam), with lunch provided afterwards. The program:

  • 10:00 – 10:35 Sebastijan Dumancic (TU Delft) Neuro-symbolic language: a missing component
    Bio. Sebastijan Dumancic is an assistant professor at TU Delft, focusing on using programming languages to represent what agents know and how they act. To this end, he has worked on program synthesis and probabilistic programming. During his PhD and postdoc in the DTAI group at KU Leuven, Sebastijan worked on the popular probabilistic neurosymbolic framework DeepProbLog. His current interest is creating machines that learn from minimal data and deploy their knowledge in novel situations.

    Abstract. Neuro-symbolic AI aims to unify the learning and reasoning paradigms of AI. The promise of this unification is great: no autonomous agent can fully function without both. Despite significant recent progress, I will argue in this talk that we have not paid enough attention to the critical component of this new field: what is the neuro-symbolic language? I will offer mostly questions and very few answers, and will outline some of our recent progress along those lines.
  • 10:35 – 11:10 Efi Tsamoura (Samsung AI Center) On Learning Latent Models with Multi-Instance Weak Supervision
    Bio. Efi Tsamoura is a Senior Researcher at the Samsung AI Center in Cambridge, UK. In 2016, she was awarded an early career fellowship from the Alan Turing Institute, UK; before that, she was a postdoctoral researcher in the Department of Computer Science at the University of Oxford. Her main research interests lie in logic, knowledge representation and reasoning, and neuro-symbolic integration. Her research has been published in top-tier AI and database venues (SIGMOD, VLDB, PODS, AAAI, IJCAI, etc.). Efi started the Samsung AI neuro-symbolic workshop series “When deep learning meets logic”.

    Abstract. We consider a weakly supervised learning scenario in which the supervision signal is generated by applying a transition function σ to the labels of multiple input instances. We formulate this problem as multi-instance Partial Label Learning (multi-instance PLL), an extension of the standard PLL problem. The problem arises in several fields, including latent structural learning and neuro-symbolic integration. Despite the existence of many learning techniques, little theoretical analysis has been dedicated to this problem. We provide the first theoretical study of multi-instance PLL with a possibly unknown transition function σ, making minimal assumptions on the data distributions; in fact, we prove learnability even under the “toughest” distributions, those that concentrate their mass on a single instance. In addition, we provide learning guarantees under widely used surrogate losses for training classifiers subject to logical theories, closing a gap in the neuro-symbolic and latent structural learning literature. This work was presented at NeurIPS 2023: https://arxiv.org/abs/2306.13796 (A toy sketch of the multi-instance PLL setup appears after the program below.)
  • 11:10 – 11:25 Break
  • 11:25 – 12:00 Cassio de Campos (TU Eindhoven) Credal Models for Uncertainty Treatment
    Bio. Cassio de Campos is a full professor at TU Eindhoven, working on the foundations of artificial intelligence and machine learning. He works on the development of probabilistic models and on the robustness and reliability of machine learning algorithms; in particular, he argues for using credal (sum-product) networks for learning and reasoning. He obtained his PhD from the University of São Paulo, Brazil, and was program chair of the UAI conference in 2022.

    Abstract. There is a trend of reevaluating artificial intelligence (AI), its advancements, and their implications for society, and uncertainty treatment plays a major role in this discussion. This talk will hopefully convince you that we can make AI more reliable and trustworthy through a sound treatment of uncertainty. Uncertainty is often modelled by probabilities, yet it has been argued that some broadening of probability theory is required for a more convincing treatment, as one may not always be able to provide a reliable probability for every situation. Credal models generalize probability theory to allow for partial probability specifications and are arguably a good direction to follow when information is scarce, vague, and/or conflicting. We will discuss credal approaches, from simple examples to sophisticated credal machine learning models. (A small worked example of a credal set appears after the program below.)
  • 12:00 – 13:00 Lunch (provided)
  • 13:45 – 15:15 PhD defence of Emile van Krieken (Auditorium of VU Amsterdam): Optimisation in Neurosymbolic Learning Systems
  • 15:15 – 18:00 Reception (Bar Boele)
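
For those curious about the setup of Efi’s talk, here is a minimal sketch of multi-instance PLL, assuming the standard MNIST-addition example from the neuro-symbolic literature: each training example is a pair of digit images whose individual labels are hidden, and the only supervision is their sum, so the transition function is σ(y1, y2) = y1 + y2. The surrogate loss below maximises the probability mass the model assigns to all label pairs consistent with the weak label; the classifier, function names, and numbers are illustrative choices of mine, not code from the paper.

```python
import torch
import torch.nn as nn

class DigitClassifier(nn.Module):
    """Tiny stand-in for a digit classifier (illustrative, not from the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.net(x).softmax(dim=-1)  # per-digit class probabilities

def multi_instance_pll_loss(p1, p2, s):
    """Negative log-probability that the two hidden labels sum to s:
    P(sum = s) = sum over all pairs (a, b) with a + b = s of p1[a] * p2[b]."""
    joint = p1.unsqueeze(-1) * p2.unsqueeze(-2)            # (batch, 10, 10)
    sums = torch.arange(10).view(-1, 1) + torch.arange(10).view(1, -1)
    mask = (sums.unsqueeze(0) == s.view(-1, 1, 1)).float() # consistent pairs
    consistent = (joint * mask).sum(dim=(-1, -2)).clamp_min(1e-12)
    return -consistent.log().mean()

# Toy usage with random "images"; a real experiment would use MNIST pairs.
model = DigitClassifier()
x1, x2 = torch.randn(32, 1, 28, 28), torch.randn(32, 1, 28, 28)
s = torch.randint(0, 19, (32,))   # weak labels: digit sums in 0..18
loss = multi_instance_pll_loss(model(x1), model(x2), s)
loss.backward()
```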
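Similarly, here is a small worked example of the partial probability specifications in Cassio’s talk: a credal set over three outcomes given only interval constraints on each probability, so that events receive lower and upper probabilities, computed here with two linear programs. The numbers are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Partial specification: only an interval for each outcome's probability.
outcomes = ["sun", "rain", "snow"]
bounds = [(0.5, 0.7), (0.2, 0.4), (0.0, 0.2)]

def probability_interval(event):
    """Lower/upper probability of an event (0/1 indicator vector) over all
    distributions consistent with the intervals: two linear programs."""
    a_eq, b_eq = np.ones((1, 3)), [1.0]  # probabilities must sum to one
    lower = linprog(event, A_eq=a_eq, b_eq=b_eq, bounds=bounds).fun
    upper = -linprog(-event, A_eq=a_eq, b_eq=b_eq, bounds=bounds).fun
    return lower, upper

# Event "bad weather" = {rain, snow}.
lo, hi = probability_interval(np.array([0.0, 1.0, 1.0]))
print(f"P(bad weather) in [{lo:.2f}, {hi:.2f}]")  # [0.30, 0.50]
```

No single probability for “bad weather” follows from the intervals above; the credal set only pins it down to [0.30, 0.50], which is exactly the kind of honest imprecision credal models are meant to express.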