A meeting of the SEMTL community will be held on Thursday, Feb 5th, 2026 at 09:00. It will take place at ÉTS, Salle Vidéotron (E-2033).
Registration
Please RSVP using this form.
Program
09:00-10:00: Keynote by Prof. Ulrich Aïvodji (ÉTS): From Explanations to Exploits: Leveraging Causally-Constrained Counterfactuals to Evade ML-based Intrusion Detection Systems
- Network Intrusion Detection Systems (NIDS) are essential for safeguarding Internet of Things (IoT) environments. Machine learning models, increasingly used in these systems for their high performance, are nevertheless vulnerable to adversarial perturbations. Existing work demonstrates the susceptibility of ML-based NIDS to attacks, yet much of it suffers from two main issues: reliance on unrealistic attacker assumptions (e.g., full access to the model) and the generation of malicious traffic that disregards inherent feature dependencies, making such traffic easily detectable or semantically invalid. Our objective is to determine whether adversarial attacks against NIDS remain effective under realistic constraints, namely limited attacker knowledge and the requirement to preserve causal feature dependencies and attack functionality after perturbation. We propose a method that leverages counterfactual explanations to generate adversarial examples against NIDS. While counterfactuals have not been previously explored in this context, they provide a simple and practical way to craft realistic attacks under limited knowledge of the model. In addition, they offer a flexible framework into which attacker knowledge constraints can be easily incorporated, making them well suited for evaluating adversarial robustness under realistic scenarios. We also highlight that existing adversarial methods generally ignore network feature interdependencies, which undermines the realism of the generated traffic. To address this limitation, we propose a method that explicitly enforces structural causal constraints, ensuring that perturbations remain consistent with the true dependencies in the data.
Experimental results demonstrate that the proposed method achieves an evasion rate of more than 80% in a realistic grey-box scenario (+40% over the state of the art), 64% in a black-box scenario (+20% over the state of the art), and around 50% when enforcing causal constraints without impacting attack functionality, which, to our knowledge, has not been evaluated in prior work.
10:00-10:30: Coffee Break
10:30-11:30: Paper presentations
- Guillaume Cantin (Nantes Université) - Statistical Model Checking for learning the parameters of a mechanistic model from forest ecology data
- Brahim Mahmoudi (ÉTS) - Specification and Detection of LLM Code Smells - ICSE 2026 New Ideas and Emerging Results (NIER)
- Houcine Abdelkader Cherief (ÉTS) - DynamicsLLM: a Dynamic Analysis-based Tool for Generating Intelligent Execution Traces Using LLMs to Detect Android Behavioural Code Smells - The 3rd ACM international conference on AI Foundation Models and Software Engineering (FORGE 2026) in ICSE 2026
11:30-12:00: Short talk by Prof. Benoit Baudry (UdeM) - Proposed creation of the Interuniversity Research Center on Interdisciplinary Software Engineering
12:15-13:30: On-Campus Social Event - Resto-Pub 100 Génies, ÉTS, Pavillon B, 530 Rue Peel, Montréal, QC H3C 2H1
Location
Salle Vidéotron (E-2033), École de Technologie Supérieure, Maison des étudiants ÉTS, Pavillon E, 1220 Rue Notre-Dame O, Montréal, QC H3C 1K6