French AI Research: April Synthesis

🧠 Specialized Large Language Model Outperforms Neurologists at Complex Diagnosis in Blinded Case-Based Evaluation | Barrit, Sami, Nathan Torcida, Aurelien Mazeraud, Sebastien Boulogne, Jeanne Benoit, Timothée Carette, Thibault Carron, Bertil Delsaut, Eva Diab, Hugo Kermorvant, et al. 2025. | Brain Sciences 15, no. 4: 347.

Abstract: Artificial intelligence (AI), particularly large language models (LLMs), has demonstrated versatility in various applications but faces challenges in specialized domains like neurology. This study evaluates a specialized LLM’s capability and trustworthiness in complex neurological diagnosis, comparing its performance with that of neurologists in simulated clinical settings. Conclusions: The specialized LLM demonstrated superior diagnostic performance compared to practicing neurologists across complex clinical challenges. This indicates that appropriately harnessed LLMs with curated knowledge bases can achieve domain-specific relevance in complex clinical disciplines, suggesting potential for AI as a time-efficient asset in clinical practice.


🧠 Gilbert Simondon's Image Theory and Human-Technology Relations through Imagination and AI Image Generation | Book: Making Media Futures | Routledge

Abstract: This chapter discusses (1) French philosopher Gilbert Simondon’s image theory by introducing his four-phase model of modes of existence of images and his idea of image evolution; and (2) applies this theory and other notions from his philosophy to the analysis of human-technology relations and the fostering of artificial intelligence (AI) literacy through culture/public discourse – thus discussing a part of what makes media futures. The argument is supported by examples of current practices of generating and sharing AI-generated synthetic images online, images produced with text-to-image generative AI applications such as MidJourney, Stable Diffusion, or DALL·E, which are shared on social media platforms like Facebook or Instagram with non-malicious intentions. These practices are interpreted as a playful engagement with technology through the lens of human-technology relations and as calling for a need for AI literacy and epistemic competence. This interpretation builds on Simondon’s image theory in Imagination and Invention and his philosophy of technology in On the Mode of Existence of Technical Objects. Overall, the contribution deals with both mental and material images, points out some human-machine differences, and discusses how living beings “host” mental images that may materialize into “image-objects.”


🧠 Enhancing Rockfall Detection Using Permanent LiDAR Scanner (PLS) Data and Automated Workflows at St. Eynard Cliff (Grenoble, France) | Manceau, L., Chanut, M.-A., Levy, C., Dewez, T., and Amitrano, D. | EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-6312

Abstract: The ANR C2R-IA (anrc2ria.fr) project aims to develop reliable decision-making tools for dynamic rockfall risk management, such as restricting access to hazardous zones during critical periods. To achieve this, we aim to develop a predictive model that relates observed rockfall events to the history of weather conditions using artificial intelligence tools. Training an artificial neural network requires a comprehensively labelled dataset of rockfall events. To build this dataset, we deployed various instruments, including a Permanent LiDAR Scanner (PLS), whose data are processed by an automated workflow to handle the large volume of hourly acquired point clouds.
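
A rough sketch of what one step of such an automated change-detection workflow might look like is given below; this is not the project's actual pipeline, and the file names, the 10 cm detection threshold, and the clustering parameters are illustrative assumptions only.

```python
# Hedged sketch: nearest-neighbour differencing of two LiDAR epochs to flag
# candidate rockfall volumes. File names, thresholds and clustering parameters
# are illustrative placeholders, not the project's workflow.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def load_xyz(path):
    """Load an ASCII point cloud with one 'x y z' triplet per line."""
    return np.loadtxt(path, usecols=(0, 1, 2))

epoch_t0 = load_xyz("scan_2025-04-01_00h.xyz")   # placeholder file names
epoch_t1 = load_xyz("scan_2025-04-01_01h.xyz")

# Distance from each point of the later epoch to the earlier surface.
tree = cKDTree(epoch_t0)
dist, _ = tree.query(epoch_t1, k=1)

# Points that moved more than an (assumed) 10 cm detection threshold.
changed = epoch_t1[dist > 0.10]

# Group changed points into candidate rockfall events (parameters are guesses).
labels = DBSCAN(eps=0.5, min_samples=30).fit_predict(changed)
for lab in set(labels) - {-1}:
    cluster = changed[labels == lab]
    print(f"candidate event: {len(cluster)} points, "
          f"extent {cluster.max(axis=0) - cluster.min(axis=0)}")
```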


🧠 Assessment of the Efficiency of a ChatGPT-Based Tool, MyGenAssist, in an Industry Pharmacovigilance Department for Case Documentation: Cross-Over Study | Benaïche A, Billaut-Laden I, Randriamihaja H, Bertocchio J | J Med Internet Res 2025;27:e65651 | DOI: 10.2196/65651

Abstract: At the end of 2023, Bayer AG launched its own internal large language model (LLM), MyGenAssist, based on ChatGPT technology, to overcome data privacy concerns. Such a tool may help reduce the burden of repetitive and recurrent tasks and free up time that could then be dedicated to activities with higher added value. Although there is a current worldwide reflection on whether artificial intelligence should be integrated into pharmacovigilance, the medical literature does not provide enough data concerning LLMs and their daily applications in such a setting. Here, we studied how this tool could improve the case documentation process, which is a duty for authorization holders as per European and French good vigilance practices.


🧠 A Cyber-Physical Infrastructure for Smart Energy Buildings | Couraud, Benoit & Franquet, Erwin & Quinard, Honorat & Barre, Pierre-Jean & Moura, Paulo & Rozier, Yann & Dechavanne, Franck & Costini, Pierre & el Youssfi, Azeddine & Taha, Ahmad & Norbu, Sonam & Flynn, D. (2024). | 10.1007/978-3-031-82065-6_3.

Abstract: The advancement of renewable energy and low-carbon technologies, such as electric vehicles, necessitates that smart buildings adopt innovative energy use cases to become adaptive and responsive. Additionally, the proliferation of Internet of Things (IoT) devices introduces new applications for enhancing comfort, air quality, health, and energy consumption. These evolutions require Building Automation Systems (BAS) to manage new devices and implement novel applications, which are often beyond the capabilities of current BAS technologies. Consequently, this paper proposes a Cyber-Physical Architecture that facilitates the integration of third-party IoT devices and the development of novel use cases. Specifically, the architecture supports the implementation of a Smart Energy Management System alongside standard BAS to optimize energy usage in smart buildings through IoT and artificial intelligence algorithms. The paper also presents a case study of the architecture's implementation in a smart building in Nice, France, and discusses the advantages and disadvantages of the proposed cyber-physical architecture for smart energy buildings.


🧠 Leveraging LLMs for legal terms extraction with limited annotated data | Breton, J., Billami, M.M., Chevalier, M. et al. Artif Intell Law (2025)

Abstract: The legal industry is characterized by the presence of dense and complex documents, which necessitate automatic processing methods to manage and analyse large volumes of data. Traditional methods for extracting legal information depend heavily on substantial quantities of annotated data during the training phase. However, a question arises as to how to extract information effectively in contexts that do not favour the use of annotated data. This study investigates the application of Large Language Models (LLMs) as a transformative solution for the extraction of legal terms, presenting a novel approach to overcome the constraints associated with the need for extensive annotated datasets. Our research delved into methods such as prompt engineering and fine-tuning to enhance their performance. We evaluated the performance of four LLMs, GPT-4, Miqu-1-70b, Mixtral-8x7b, and Mistral-7b, and compared them against rule-based and BERT-based systems, within the scope of limited annotated data availability. We implemented and assessed our methodologies using Luxembourg’s traffic regulations as a case study. Our findings underscore the capacity of LLMs to deal successfully with legal term extraction, emphasizing the benefits of one-shot and zero-shot learning in reducing reliance on annotated data, reaching an F1 score of 0.690. Moreover, our study sheds light on optimal practices for employing LLMs in the processing of legal information, offering insights into the challenges and limitations, including issues related to term boundary extraction.
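
As a hedged illustration of the zero-shot setting described above (the prompt wording and API usage below are assumptions, not the prompts used in the study), legal terms can be requested directly from a chat LLM without any annotated examples:

```python
# Hedged sketch of zero-shot legal-term extraction with a chat LLM.
# The prompt wording and example text are illustrative, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article = (
    "Les véhicules doivent circuler à une vitesse maximale de 50 km/h "
    "en agglomération, sauf dispositions contraires de l'autorité municipale."
)

response = client.chat.completions.create(
    model="gpt-4",  # one of the models compared in the study
    messages=[
        {"role": "system",
         "content": "You extract legal terms from regulatory text. "
                    "Return one term per line, copied verbatim from the input."},
        {"role": "user", "content": article},
    ],
    temperature=0,
)

terms = [t.strip() for t in response.choices[0].message.content.splitlines() if t.strip()]
print(terms)
```

A one-shot variant would simply prepend a single annotated article and its gold terms to the user message.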


🧠 Digital Twins Empowered by Thermodynamics-Informed Neural Networks | Digital Twins in Engineering & Artificial Intelligence and Computational Methods in Applied Science

Abstract: Digital Twins have become a transformative technology in the realm of engineering and industrial applications, enabling real-time monitoring, optimization, and predictive maintenance of complex systems. The accurate simulation of physical behaviors in Digital Twins is essential for ensuring their reliability and robustness, particularly in scenarios involving complex materials. Traditional computational methods, while accurate, often entail significant processing times due to the high dimensionality and non-linearity of such systems. In this work, we present an approach leveraging Thermodynamics Informed Graph Neural Networks (TIGNNs) [1] in a local form [2] to build highly efficient and accurate digital twins of hyperelastic solids. Our method integrates the GENERIC (General Equation for Non-Equilibrium Reversible Irreversible Coupling) [3] framework, a thermodynamics-based formalism, to ensure the consistency of the learned models. By incorporating thermodynamic principles directly into the network’s architecture, we significantly enhance the physical plausibility and stability of the simulations.
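
For readers unfamiliar with the formalism, the GENERIC framework referred to above is conventionally written as follows (a standard statement of the formalism, not an equation taken from this work):

```latex
\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}
  = L(\mathbf{z})\,\frac{\partial E}{\partial \mathbf{z}}
  + M(\mathbf{z})\,\frac{\partial S}{\partial \mathbf{z}},
\qquad
L(\mathbf{z})\,\frac{\partial S}{\partial \mathbf{z}} = \mathbf{0},
\qquad
M(\mathbf{z})\,\frac{\partial E}{\partial \mathbf{z}} = \mathbf{0}
```

Here z is the system state, E the energy potential and S the entropy potential; L is a skew-symmetric (reversible, Hamiltonian) operator and M a symmetric positive semi-definite (irreversible, dissipative) operator. The two degeneracy conditions guarantee energy conservation and non-negative entropy production, which is the thermodynamic consistency that TIGNNs encode directly in the network architecture.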


🧠 Evaluating the Confidentiality of Synthetic Clinical Texts Generated by Language Models | Foucauld Estignard, Sahar Ghannay, Julien Girard-Satabin, Nicolas Hiebel, Aurélie Névéol. 23rd International Conference on Artificial Intelligence in Medicine (AIME), June 2025, Pavia, Italy.

Abstract: Large Language Models (LLMs) can be used to produce synthetic documents that mimic real documents when these are not available due to confidentiality or copyright restrictions. Herein, we investigate potential privacy breaches in automatically generated documents. We use synthetic texts generated from a pre-trained model fine-tuned on French clinical cases to evaluate potential privacy breaches along three directions: (1) similarity between the real training corpus and the synthetic corpus; (2) strong correlations between clinical features in the training and synthetic corpora; and (3) a Membership Inference Attack (MIA) using a model fine-tuned on the synthetic corpus. We identify clinical feature associations that suggest strategies for filtering the training corpus that could contribute to privacy preservation. Membership inference attacks were not conclusive.
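
A minimal sketch of one common form of membership inference attack, loss thresholding with a causal language model, is shown below; the checkpoint name, candidate texts, and threshold are placeholders, and this does not reproduce the authors' MIA protocol.

```python
# Hedged sketch of a loss-threshold membership inference attack (MIA).
# Checkpoint name, candidate texts and threshold are placeholders only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "target-clinical-lm"  # hypothetical fine-tuned checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def doc_loss(text: str) -> float:
    """Average per-token negative log-likelihood of a document."""
    ids = tok(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

candidates = ["Patient admis pour céphalées brutales ...",
              "Suivi d'une insuffisance rénale chronique ..."]

THRESHOLD = 2.5  # assumed calibration value
for text in candidates:
    is_member = doc_loss(text) < THRESHOLD  # unusually low loss suggests membership
    print(is_member, text[:40])
```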


🧠 “Ethics of AI” as a Floating Signifier; or Towards a Politics of AI Ethics | Romele, A. (2025). In Fioravante, R., Vaccaro, A. (eds) Humanism and Artificial Intelligence. SpringerBriefs in Philosophy. Springer, Cham.

Abstract: In this chapter, we question the use of the term “ethics of AI”. Our thesis is that “ethics of AI” is a floating signifier, a notion borrowed from Ernesto Laclau. The article is divided into two parts. In the first part, we discuss the need to include in the ethics of AI a reflection on the ethics of communication about AI. We also introduce the concept of the floating signifier. In the second part, we propose an analysis of the discourses mobilizing “ethics of AI” in the daily press. We examine articles mobilizing “ethics of AI” that appeared in eight daily newspapers in four European countries (France, Italy, the UK, and Germany) over a three-month period. We show the existence of three discursive uses of AI ethics: the normativity of institutions, the critique of researchers, and the techno-solutionism of companies.


🧠 Material Decomposition in Photon-Counting Computed Tomography with Diffusion Models: Comparative Study and Hybridization with Variational Regularizers | Vazia, Corentin, Thore Dassow, Alexandre Bousse, Jacques Froment, Béatrice Vedel, Franck Vermet, Alessandro Perelli, Jean-Pierre Tasu, and Dimitris Visvikis. | arXiv preprint arXiv:2503.15383 (2025)

Abstract: Photon-counting computed tomography (PCCT) enables spectral imaging and material decomposition (MD) but often suffers from low signal-to-noise ratios due to constraints like low photon counts and sparse-view settings. Traditional variational methods depend heavily on handcrafted regularizers, while AI-based approaches, particularly convolutional neural networks (CNNs), have become state-of-the-art. More recently, diffusion models (DMs) have gained prominence in generative modeling by learning distribution functions, which can serve as priors for inverse problems. This work explores DMs as regularizers for MD tasks in PCCT using diffusion posterior sampling (DPS). We evaluate three DPS-based approaches: image-domain two-step DPS (im-TDPS), projection-domain two-step DPS (proj-TDPS), and one-step DPS (ODPS). Im-TDPS first samples spectral images via DPS, then performs image-based MD; proj-TDPS applies projection-based MD before sampling material images via DPS; ODPS directly samples material images from measurement data. Results show ODPS outperforms im-TDPS and proj-TDPS, producing sharper, noise-free, and crosstalk-free images. Additionally, we propose a hybrid ODPS method integrating DM priors with variational regularizers to handle materials absent from the training dataset. This approach enhances material reconstruction quality over standard variational methods.
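
The sketch below illustrates a generic diffusion posterior sampling loop of the kind the abstract builds on; it is an illustrative simplification (assuming a DDIM-style ancestral step), and the noise predictor, forward operator, and schedule are placeholders rather than the paper's im-TDPS, proj-TDPS, or ODPS implementations.

```python
# Hedged sketch of a generic diffusion posterior sampling (DPS) loop: an
# unconditional denoiser is guided at each reverse step by the gradient of a
# data-fidelity term ||y - A(x0_hat)||. All components are placeholders.
import torch

def dps_sample(eps_model, forward_op, y, alphas_bar, x_shape, zeta=0.5):
    """eps_model(x, t) predicts the added noise; forward_op maps a material or
    image estimate to measurement space; alphas_bar is the noise schedule."""
    x = torch.randn(x_shape)
    T = len(alphas_bar)
    for t in reversed(range(T)):
        x = x.detach().requires_grad_(True)
        a_bar = alphas_bar[t]
        eps = eps_model(x, t)
        # Tweedie-style estimate of the clean image from the noisy state.
        x0_hat = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
        # Data-fidelity guidance: gradient of the residual norm w.r.t. x.
        residual = torch.linalg.vector_norm(y - forward_op(x0_hat))
        grad = torch.autograd.grad(residual, x)[0]
        # Simplified deterministic reverse step, then likelihood correction.
        a_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
        x = x - zeta * grad
    return x.detach()
```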


🧠 Artificial intelligence-based personalised rituximab treatment protocol in membranous nephropathy (iRITUX): protocol for a multicentre randomised control trial | Teisseyre M, Destere A, Cremoni M, et al | BMJ Open 2025;15:e093920. doi: 10.1136/bmjopen-2024-093920

Abstract: Membranous nephropathy is an autoimmune kidney disease and the most common cause of nephrotic syndrome in non-diabetic Caucasian adults. Rituximab is now recommended as first-line therapy for membranous nephropathy. However, Kidney Disease Improving Global Outcomes guidelines do not recommend any specific protocol. Rituximab bioavailability is reduced in patients with membranous nephropathy due to urinary drug loss. Underdosing of rituximab is associated with treatment failure. We have previously developed a machine learning algorithm to predict the risk of underdosing. We have retrospectively shown that patients with a high risk of underdosing required higher doses of rituximab to achieve remission. The aim of this prospective study is to evaluate the efficacy of algorithm-driven rituximab treatment in patients with membranous nephropathy compared to standard treatment.

🧠 Mapping the Past: Unlocking Historical Explorer Narratives with AI and Geospatial Tools | Barreau, Jean-Baptiste. 2025. Electronics 14, no. 7: 1395.

Abstract: This study explores the use of artificial intelligence and geospatial tools to analyze historical explorers’ narratives. Explorers’ accounts provide valuable insights into the cultural, environmental, and logistical dynamics of exploration journeys. However, traditional methods of analyzing these narratives are often subjective and difficult to reproduce on a large scale. The main objective is to overcome the limitations of traditional methods by using AI techniques to systematically extract and structure information from explorers’ narratives. This study employs Python scripts to extract factual data from narratives available on Project Gutenberg, followed by structuring the data in JSON format. Geographic data are enriched through geocoding using libraries such as Geopy and OpenCage. An interactive web interface based on Leaflet allows for the visualization and validation of explorers’ routes. The results show a concentration of visits in North and West Africa, with traditional modes of transport like caravans and traveling on foot being dominant. The main challenges faced were related to transportation, climatic conditions, and natural obstacles. Principal component analysis (PCA) and correspondence analysis reveal latent structures in the data, while clustering analysis segments the journeys based on similarity criteria. This research demonstrates the value of AI and geospatial tools for a more objective and detailed analysis of explorers’ narratives, opening new perspectives for historical and geographical studies.
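
A minimal sketch of the geocoding step, assuming geopy's OpenCage backend as mentioned in the abstract, is given below; the API key, place names, and output layout are illustrative only.

```python
# Hedged sketch of the geocoding step: resolve place names extracted from a
# narrative into coordinates with geopy's OpenCage backend, then emit a
# JSON structure a Leaflet front end could consume. Key and names are placeholders.
import json
from geopy.geocoders import OpenCage

geocoder = OpenCage(api_key="YOUR_OPENCAGE_KEY")  # placeholder key

stops = ["Tombouctou", "Kano", "Lac Tchad"]  # places extracted from a narrative
route = []
for name in stops:
    loc = geocoder.geocode(name)
    if loc is not None:
        route.append({"place": name, "lat": loc.latitude, "lon": loc.longitude})

print(json.dumps(route, ensure_ascii=False, indent=2))
```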


🧠 Netlogopy: Unlocking Advanced Simulation and Integration for NetLogo Using Python | Bouaziz, Nourddine & Bettayeb, Belgacem & Sahnoun, M'hammed & Yassine, Adnan. (2025). 10.36819/SW25.019.

Abstract: NetLogo is widely recognized as one of the most popular software tools for agent-based simulation. However, it has notable limitations, particularly the lack of advanced libraries in specialized areas such as optimization, artificial intelligence (AI), and mechanical or electrical modeling. On the other hand, Python is a feature-rich programming language that is increasingly used in various research domains. This study explores the integration of NetLogo and Python to leverage the strengths of both tools. The result of this integration is the Netlogopy library, which allows direct control of NetLogo agents from Python, providing greater flexibility through Python’s ecosystem. Netlogopy is a freely available library that adds an additional layer to existing NetLogo models, enhancing simulation capabilities and making them more accessible to researchers.


🧠 Uncovering the Fairness of AI: Exploring Focal Point, Inequality Aversion, and Altruism in ChatGPT's Dictator Game Decisions | Dodivers, Eléonore and Rafaï, Ismaël, (2025), No 2025-09, GREDEG Working Papers, Groupe de REcherche en Droit, Economie, Gestion (GREDEG CNRS), Université Côte d'Azur, France, https://EconPapers.repec.org/RePEc:gre:wpaper:2025-09.

Abstract: This paper investigates the social preferences of Artificial Intelligence Large Language Models (AI-LLMs) in Dictator Games. Brookins and Debacker (2024, Economics Bulletin) previously observed a tendency of ChatGPT-3.5 to give away half its endowment in a standard Dictator Game and interpreted this as an expression of fairness. We replicate their experiment and introduce a multiplicative factor on donations, which varies the efficiency of the transfer. Varying transfer efficiency disentangles three explanations for donations (inequality aversion, altruism, or a focal point). Our results show that ChatGPT-3.5's donations should be interpreted as a focal point rather than an expression of fairness. In contrast, a more advanced version (ChatGPT-4o) made decisions that are better explained by altruistic motives than by inequality aversion. Our study highlights the need to explore the parameter space when designing experiments to study AI-LLM preferences.
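
Under stylized assumptions (an illustration added here, not the authors' formal model), a dictator with endowment E who donates d while the recipient receives k·d makes distinguishable predictions under the three explanations:

```latex
\text{focal point: } d^{*}(k) = \tfrac{E}{2} \ \ \forall k,
\qquad
\text{inequality aversion: } E - d^{*} = k\,d^{*} \;\Rightarrow\; d^{*}(k) = \tfrac{E}{1+k},
\qquad
\text{altruism: } d^{*}(k) \text{ weakly increasing in } k
```

A donation that stays at E/2 as k varies points to a focal point, a donation that falls as k rises points to inequality aversion, and a donation that rises with k points to efficiency-oriented altruism.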


🧠 Contesting dominant AI narratives on an industry-shaped ground: Public Discourse and Actors around AI in the French Press and Social Media (2012-2022) | Tsimpoukis, P. (2025). JCOM 24(2), A10.

Abstract: This paper studies how artificial intelligence was put on the agenda in the press and on social media in France. By simultaneously analysing the framing of AI and the key actors who dominated the discourse on this technology in the national press and on the X and Facebook platforms, the study highlights, on the one hand, the influence of digital companies and government narratives and, on the other, the presence of alternative stakeholder perspectives that diverge from dominant discourses and contribute to political polarisation on AI-related issues such as facial recognition. Our study sheds light on how AI framing can reveal dominant and alternative narratives and visions and may contribute to the consolidation of socio-technical imaginaries in the French public sphere.


🧠 Coupling high resolution meteorological models with neural networks for flash flood forecasting: implementation on a Southern France basin | Gautier, S., Artigue, G., Tramblay, Y., and Johannet, A.: EGU General Assembly 2025, Vienna, Austria, 27 Apr–2 May 2025, EGU25-8432, 2025.

Abstract: Flash floods are a major hazard that particularly affects the Mediterranean region. Flood forecasting using simulation tools adapted to this context is therefore a crucial issue. In exposed regions, the difficulty of measuring and forecasting the spatial variability and intensity of rainfall, as well as the difficulty of identifying processes at the necessary time and space scales, has often led to the use of highly conceptual, or even statistical, models that make few assumptions about hydrological processes. Among these, neural networks have proven their relevance for flash flood forecasting. However, without hydrometeorological coupling, flow forecasting is often limited to the response time of the basin, i.e. a few hours in general. The purpose of this work is to find a way of increasing this lead time, which is often too short for crisis management.
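
A minimal sketch of such hydrometeorological coupling, on synthetic data and with placeholder features and hyperparameters, would feed the observed rainfall history plus forecast rainfall from a meteorological model into a neural network that predicts discharge beyond the basin response time:

```python
# Hedged sketch of hydrometeorological coupling: observed rainfall history plus
# forecast rainfall feed a neural network predicting discharge at an extended
# lead time. Data, features and hyperparameters are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, hist, fcst = 500, 24, 6   # samples, hours of past rain, hours of forecast rain

past_rain = rng.gamma(2.0, 2.0, size=(n, hist))
forecast_rain = rng.gamma(2.0, 2.0, size=(n, fcst))
X = np.hstack([past_rain, forecast_rain])
# Toy target: discharge responds to a weighted sum of recent and forecast rain.
y = past_rain[:, -6:].sum(axis=1) + 0.8 * forecast_rain.sum(axis=1) + rng.normal(0, 1, n)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("test R^2:", model.score(X[400:], y[400:]))
```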


🧠 The Effects of Artificial Intelligence as a Tool in French Language Translation Studies in Port Harcourt Metropolis | Jaja, E. K. (2025). Cascades, Journal of the Department of French & International Studies, 3(1), 97–103.

Abstract: The development of artificial intelligence (AI) as a tool in French language translation has had a definite impact on translation jobs. People, even professional translators, are relying on artificial intelligence for French language translation. Translation studies from the perspective of artificial intelligence are characterized by intelligence, situational factors, and integration. The research fields of translation studies from this perspective mainly include the study of translation product quality and effectiveness, the study of translation processes, and the study of French language translation teaching. The consequence of all this is that AI translation techniques and activities are competing with, and to an extent replacing, human translation.