Helen, your career combines law, international finance and research projects in AI governance, spanning a consulting firm (Morgan & Colney), startups and large groups: how has this hybrid trajectory shaped your very concrete vision of AI governance?
My background across law, investment and AI gives me a structural view of governance. I think of AI governance like architecture: you don't start with the paint, you start with the foundations, the load-bearing structure. AI governance is the operating system for trust. And because I've worked across consulting firms, startups and large groups, I can translate between legal, technical and business teams in a way that makes AI governance actually work in operations. What my hybrid trajectory taught me is that the failure point is almost never the regulation itself. It's the gap between what technologists build and what decision-makers understand. I sit in that gap.
You advise clients on AI strategies in a context of growth and cross-border expansion: drawing on a concrete case (without naming it), can you describe how you put in place an AI governance framework that accounts for multiple regulations (EU, India, Southeast Asia, etc.)?
When I build cross-border AI governance, I start with one global core (for example: risk assessment, documentation, human oversight) and then add regional overlays for the EU, India or Southeast Asia, depending on the client's needs. Then I look at practical tools: risk registers, impact assessments, vendor checks, data-flow protocols and so on. My advice is not to build three separate compliance stacks. Instead, build one solid core and layer jurisdictions on top. Another practical challenge is always change management. The person responsible for AI governance within the organisation often has no map and no mandate, so I also help them navigate stakeholders, build buy-in and create momentum. When it works, a client can respond to two different regulators in two different jurisdictions within days instead of months.
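To make the "one core, layered overlays" idea concrete, here is a minimal Python sketch; the control names, jurisdiction keys and obligations are illustrative assumptions, not a description of any actual regulation:

```python
# Illustrative sketch only: one shared governance core plus per-jurisdiction
# overlays. All control names and obligations are hypothetical examples.

GLOBAL_CORE = {
    "risk_assessment": "classify every AI system by impact before deployment",
    "documentation": "maintain model cards and data-flow records",
    "human_oversight": "name who can override or halt each system",
}

REGIONAL_OVERLAYS = {
    "EU": {
        "conformity_assessment": "run pre-market checks on high-risk systems",
        "transparency_notice": "tell end users when they interact with AI",
    },
    "India": {
        "data_localisation": "check sector-specific storage requirements",
    },
    "SEA": {
        "sector_guidance": "map national, sector-specific AI guidelines",
    },
}


def controls_for(jurisdictions: list[str]) -> dict[str, str]:
    """Merge the shared core with the overlays a client actually needs,
    rather than building a separate compliance stack per market."""
    merged = dict(GLOBAL_CORE)
    for jurisdiction in jurisdictions:
        merged.update(REGIONAL_OVERLAYS.get(jurisdiction, {}))
    return merged


if __name__ == "__main__":
    for control, obligation in controls_for(["EU", "India"]).items():
        print(f"{control}: {obligation}")
```

The point of the structure is the merge: the core never changes, and entering a new market only means writing a new overlay, which is what makes a days-not-months response to two regulators plausible.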
As a former head of legal for India and Southeast Asia at OYO, what major gaps did you observe between the approaches to AI regulation and governance in Asia and in Europe, and how do these differences shape the strategic recommendations you give clients today?
For me, Europe is prescriptive and penalty-driven. The EU AI Act is essentially a product safety law applied to software that learns and evolves, with clear obligations and timelines. Asia is very different: more fragmented, more innovation-first and often sector-specific. There's a common misconception that this makes AI governance in Asia easier. It doesn't; the complexity is simply different. You cannot copy-paste a European model. What I tell clients is to design for flexibility and to treat Asia not as a compliance-light zone but as a first-mover opportunity. Being ahead of local regulation builds genuine trust with regulators and enterprise clients.
You also work as a startup mentor at Tech Forward and an investment analyst at Plug and Play: what mistakes or blind spots do you most often find yourself correcting in young tech companies when it comes to building AI governance into their products and business models from the design stage?
Startups almost always treat AI governance as something to worry about after Series A. Often, no one owns AI risk: it falls between legal, product and data teams, and everyone assumes someone else is handling it. Data provenance isn't documented. Audit trails and model cards are ignored. Everything is MVP and go-to-market. Personally, I push founders to start with three things: a one-page risk register, a data lineage map and a clear human-in-the-loop policy. That's it. These basics prevent enormous problems later, and they signal maturity to enterprise clients and investors far earlier than founders expect. I'm also building structured programs to help teams develop AI governance fluency from the inside, because external consultants can't be the long-term answer.
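As a rough illustration of those three founder-stage artifacts, here is a minimal Python sketch; every field name, policy rule and example entry is hypothetical:

```python
# Illustrative sketch of the three artifacts: a one-page risk register,
# a data lineage map and a human-in-the-loop policy. All examples invented.

from dataclasses import dataclass


@dataclass
class RiskRegisterEntry:
    """One row of the one-page risk register: what could go wrong,
    how badly, and who personally owns it."""
    system: str
    risk: str
    severity: str   # "low" / "medium" / "high"
    owner: str      # a named person, not a team
    mitigation: str


@dataclass
class DataLineageRecord:
    """One edge of the data lineage map: where a dataset came from
    and on what terms it may be used."""
    dataset: str
    source: str
    licence: str
    contains_personal_data: bool


# The human-in-the-loop policy reduced to its operational core:
# the decisions a model may never take alone.
HITL_POLICY = {
    "support_bot": "escalate to a human on any refund or legal question",
    "pricing_model": "model proposes, a human approves above threshold",
}

register = [
    RiskRegisterEntry(
        system="support_bot",
        risk="hallucinated refund promises",
        severity="high",
        owner="Jane Doe (Head of Product)",  # hypothetical owner
        mitigation="escalation rule in HITL_POLICY plus weekly review",
    ),
]
```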
From the very operational standpoint of compliance and risk management, how do you connect AI governance, legal liability and business performance? In other words, how do you convince an executive committee that AI governance is a competitive lever rather than yet another cost centre?
I frame AI governance as reducing the cost of trust, and that reframing changes the entire conversation with a management committee. Enterprise clients now request AI governance documentation before signing contracts. Cyber insurers are actively pricing AI risk. And one governance failure (a data incident, a biased model decision, a regulatory fine) costs far more than building the system properly from the start. The metric I use is time-to-trust: how quickly can you explain how your AI makes decisions to a client, a regulator or your board? If that answer takes six weeks, you don't have a compliance problem; you have a competitiveness problem. AI governance-ready companies close deals faster, attract institutional capital and retain enterprise clients. That's the business case.
Between the European AI Act, the emerging frameworks in India and Singapore, and the voluntary initiatives of the major platforms, what might a genuinely cross-border AI governance architecture look like in 5 to 10 years, and what would the consequences be for mid-sized companies seeking to internationalise?
We're heading toward a patchwork of regional frameworks with interoperability bridges, very similar to what happened with privacy law. I think the EU will set the global baseline through the Brussels Effect, ASEAN frameworks will mature and India will emerge as a genuine third pole. Mutual recognition agreements between regulatory sandboxes (Singapore is already piloting this) will become the real unlock. For mid-sized companies that want to internationalise, AI governance stops being a compliance question and becomes a market-access question. The companies documenting their AI governance now will move faster when those bridges open, win public procurement and attract institutional capital. That advantage is very real, and it compounds. Those waiting for a final global framework will be permanently reactive.
To conclude, what very pragmatic advice would you give the executives and lawyers reading this who have not yet structured their AI governance: where should they start tomorrow morning, with very limited resources but a genuine will to act?
Don't start with the regulation; start with your AI inventory. List every system your organisation uses or builds, assign one named owner and run a quick risk triage. Pick one high-risk use case, govern it properly, then replicate the template. You don't need a large team: you need a clear process and a written record. For lawyers specifically: stop waiting for the rules to settle. Your value is operationalising principles before the final text exists, and that's precisely when clients need you most. AI literacy is the real differentiator, and it's exactly what I'm building for lawyers now: a structured, practical program to develop that fluency from the inside out.
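As an illustration of that "tomorrow morning" exercise, here is a minimal Python sketch of an inventory with named owners and a first-pass triage; the systems, owners and scoring rule are invented for the example:

```python
# Minimal sketch: list every AI system, name one owner, triage by risk.
# Systems, owners and the scoring rule below are made up for illustration.

inventory = [
    {"system": "cv_screening_tool", "owner": "HR Director",
     "affects_people": True, "automated_decision": True},
    {"system": "credit_pre_check", "owner": "Risk Officer",
     "affects_people": True, "automated_decision": False},
    {"system": "marketing_copy_assistant", "owner": "CMO",
     "affects_people": False, "automated_decision": False},
]


def triage(entry: dict) -> tuple[int, str]:
    """Crude first pass: automated decisions about people come first."""
    if entry["affects_people"] and entry["automated_decision"]:
        return (0, "HIGH: govern this one properly first")
    if entry["affects_people"]:
        return (1, "MEDIUM: document it and add human oversight")
    return (2, "LOW: an inventory entry is enough for now")


for entry in sorted(inventory, key=lambda e: triage(e)[0]):
    _, advice = triage(entry)
    print(f"{entry['system']:25} owner: {entry['owner']:13} {advice}")
```

The output of a pass like this is the written record the answer above describes: one page, one owner per system, and an obvious first use case to govern properly.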
To find out more: https://www.morganandcolney.com