For those who frequent social networks, it is a familiar sight: users announcing their departure from these platforms in the most solemn manner, always definitive and without appeal, of course. The virtual equivalent of slamming a door or hanging up mid-call, regulars of this practice stage their distancing from the networks, sometimes even drawing up lists of grievances accumulated over the course of their digital life. Supreme proof of recognition in our era, "memes" have enshrined these theatrical exits, reminding us, for example, that social networks are neither a train station nor an airport, and that there is no point in announcing one's departure (or arrival, for that matter).
It is, therefore, fully aware of the shortcomings of this practice that I announce, with great humility and sincere gratitude to all my colleagues, friends and acquaintances, the end of my activities related to the study of the impact of digital transformation, in September 2021.
Let me be clear about this approach and my expectations in giving in to it: I obviously expect no tributes (I am not yet six feet under), no vibrant praise (if anyone were ever so tempted), and even less any regrets or encouragement to come back later. Everything is fine with me! This decision is the consequence of a welcome promotion to brand-new responsibilities of an administrative and budgetary nature within the intergovernmental organisation where I am employed, a promotion that I wanted and sought. But this turning point, carefully prepared and constructed, also owes something, I must admit, to the impasses, the substantial questioning and the introspection concerning the meaning of my contribution to the current debates on digital transformation and artificial intelligence (AI).
I have been interested in theorising on these subjects for a few years. Since 2018 I have been lucky enough to make it a full-time activity, combining work on regulatory projects at the Council of Europe, research at the IHEJ (Institut des Hautes Études sur la Justice) and teaching at the University of Strasbourg (Master 2 programmes in cyberjustice, digital economy law and e-commerce), not to mention my investment in Amicus Radio with the podcast “Les Temps Électriques” and my involvement in think tanks such as the Sapiens Institute. The publication of a book was the culmination of this dynamic, bringing together in detail most of the analyses I had produced on the subject and proposing my views on the content of an international treaty on AI. At the end of this long process of collation, juxtaposition and putting into perspective, I came away with some clear convictions: first, the need for binding regulation to frame the development of AI; second, the need never to stop specifying and clarifying the various notions and concepts mobilised in this field, so as to feed quality debates and avoid fuelling persistent confusions and misunderstandings; and finally, the need to slow down in order to (re)take the time to agree on the objectives and the meaning of the world we wish to leave to our children, while the market races to profit from this transformation as quickly as possible and a multidimensional competition with uncertain consequences polarises public policies.
I confess that I find it very difficult to form an idea of the impact of these reflections. It has to be said that the relationship between the production of a discourse and its effects on those who receive it is never easy to decipher. One must also note a certain saturation of expert discourse in the public arena, to which I am aware of having contributed… but let us try to remain objective.
On the one hand, I have accompanied a movement now well under way in Europe to regulate digital technologies and AI: the European Commission published a draft regulation in April 2021 and the Council of Europe is in the process of formalising its own. My “plea for regulation” has therefore reflected this reality.
On the other hand, I am aware that some of the key principles of my book go unheard today. For example, where the consequences of the use of algorithms for individuals and society appear sufficiently significant, it should be conceivable simply to limit, or even ban, the use of these technologies and to continue to prefer other, very human, mechanisms framed by solid procedural guarantees. Such proportionality in the use of algorithms no doubt seems somewhat audacious, tinged with ignorance of the great mechanisms of ‘progress’ at work. But I still believe that the emergence of a “rule of algorithms” replacing the rule of law, and the environmental and societal impacts of the transformation of our world into data, processed by a vast and still heterogeneous system of algorithms, can only be contained by deliberately preserving some blind spots. The consensus that has emerged in recent years on digital technology takes the benefits of this transformation for granted (who would oppose innovation?), and none of the current regulatory projects has seriously investigated this principle of proportionality, which most experts and public decision-makers no doubt find absurd. Instead, they prefer to accompany the acceleration, essentially seeking ways to create trust in the operation of these machines through a clever mix of safety mechanisms (such as certification, which I have myself supported on many occasions), non-binding mechanisms (including ethics, which I have often criticised) and mechanisms based on fundamental rights.
So yes, there is certainly a supreme form of elegance in continuing to sail vigorously against the headwinds of consensus, or in stubbornly working through the contradictions of this world, beginning with one’s own. But, to quote Philippe Corcuff (himself quoting Maurice Merleau-Ponty), “to be of one’s time is to be caught up in something still confused, composite and ambiguous, which sticks to our skin and from which only a partial distance can be taken, so much do our adherences to its obviousness remain prevalent”. And I believe this is the most substantial reason why I do not feel sufficiently armed today to keep going to the front and taking part in publicly deconstructing, with extreme rigour, the plural components of this digital transformation.
Kate Crawford’s “Atlas of AI” captures the substance far better, while admitting its own limitations. Her attempt at a systemic approach, mapping the entire process from the extraction of the minerals used to build the hardware, through its exploitation consuming vast amounts of energy, to its abandonment in open dumps, is certainly the first essential and foundational brick of any critical approach. This vision of the system we are building struck me as forcefully as an astronaut seized for the first time by the roundness of the Earth.
Explaining that mathematical and statistical formalisms do not necessarily lead to more neutral and objective decision-making in environments that are difficult to model, such as the functioning of our society, is probably the second brick of this critical thinking, one that authors such as Cathy O’Neil or Pablo Jensen (to name but a few) have brilliantly laid.
I myself have tried to address this issue, with a great deal of humility, from the perspective of the digital transformation of justice, but I am aware of having achieved only a partial distance, insufficient to convince my peers. It was not enough, then, to bring into the public debate concrete elements demonstrating the plainly counter-intuitive effects of introducing algorithms that statistically process case law in order to forecast judicial decisions. Quite regularly, my position has been placed at one end of a pendulum, with the bold innovations of entrepreneurs at the other, in an attempt to deduce a middle way. One may perceive in this a form of prudence, or a professional reflex of judicial practice seeking to balance the competing arguments; but this particular subject, because of its profound impact on the very essence of the act of judging, deserves, in my opinion, to be examined in all its complexity in order durably to invalidate any prescriptive, predictive, quantitative or actuarial use. So yes, the descriptive capacities of these algorithms applied to case law, alongside other scientific protocols shedding light on its meaning, obviously deserve investment in research (identifying precisely what is being measured, which is not necessarily the decision itself but the regularity of its formalism). But we need to wake up and stop reactivating, under a veneer of AI, the rather old fantasy that law can be mathematised. No fundamental revolution has occurred in this area since the attempts of Leibniz or Condorcet, and the limits of the various systems of deontic logic are well known.
However, the end of my activities related to the study of the impact of digital transformation is not the end of the road. Modestly, I know that the paths I have taken are already being trodden by others: I confess it would be hard for me not to keep an eye on them, hoping to see a constructive critical spirit on digital technologies develop widely. Moreover, my own career has been marked by many alternations of subjects and disciplines, to keep my mind sharp and never let myself be locked into the comfort of my own certainties. Perhaps I had simply reached the point of having dwelt too long on AI.
In the meantime, all that remains is for me to roll up my sleeves and invest myself in my new duties, putting into practice, where necessary, the proper use of algorithmic systems in the fully operational context of administering an intergovernmental organisation. Rather than glossing or speculating, confronting the roughness of reality with humility will be every bit as instructive.
I almost forgot: don’t be surprised to see my latest thoughts on AI regulation appearing well into next year. I have some texts still making their way through publication, so these are no ‘false starts’.
With my sincere and heartfelt thanks for having followed me during these years!
Host of Les Temps Électriques and author of the book “L’intelligence artificielle en procès”
The opinions expressed are solely those of the author and do not reflect any official position of the Council of Europe
Y. Meneceur, L’intelligence artificielle en procès : Plaidoyer pour une réglementation internationale et européenne, Bruylant, 2020
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final – https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0020.02/DOC_1&format=PDF
P. Corcuff, La grande confusion : Comment l’extrême droite gagne la bataille des idées, Textuel, 2021, p. 46
 K. Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021
C. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, 2016
P. Jensen, Pourquoi la société ne se laisse pas mettre en équations (“Why society cannot be put into equations”), Seuil, 2018
For the sake of brevity, I will not elaborate here on the instrumentalisations or counter-interpretations of my conclusions on this subject, with some speakers citing the nuances of my writings to legitimise the very use of these techniques.