Nadio, as someone deeply involved in the democratization of responsible AI, could you share a specific experience that cemented your commitment to this mission?
Thanks for the question. There have been a few defining moments, but if I had to pinpoint one that truly cemented my commitment to the mission of democratising responsible AI, it would be the period following my cancer diagnosis.
That time gave me pause—and perspective. Facing stage 3 cancer made me reflect deeply on my purpose, not just professionally but personally. It wasn’t just about leaving a legacy, but about ensuring the work I was doing had real, tangible impact for the people and communities who might otherwise be left behind by this AI revolution. That’s when I truly accelerated my mission—to make AI accessible, responsible, and inclusive.
I co-founded The Age of Human Think Tank around that time, which helped reframe my thinking. It pushed me to advocate not only for ethical frameworks but for grassroots-led AI literacy. I also launched initiatives like the AI Driving Test (a kind of national AI literacy qualification) and proposed ideas such as Digital Mayors, all designed to put people, not just tech, at the heart of AI transformation.
In short, my commitment to the mission was forged in challenge, and it’s driven by urgency—to bridge the digital divide with human-first solutions that empower, not replace.
With your extensive experience in creating 100 GPT models in 100 days, what were some of the unexpected challenges and insights you gained from this ambitious project?
The Unexpected Challenges?
First and foremost—mental stamina. It’s one thing to be creative, it’s another to be consistently creative every day for over three months. It really tested my discipline, not just as a business owner, consultant, academic or strategist, but as a storyteller. I had to make each GPT distinct, useful, and often fun. The other challenge was wrestling with AI guardrails. Some ideas I had were perfectly innocent but got flagged as sensitive. So I had to become a bit of a prompt engineer and a diplomat.
There were also surprising technical constraints. You’d think GPTs could do anything, but the reality is they have quirks. Things like contextual drift, token limits, or even formatting inconsistencies meant I had to test and tweak… a lot more than expected.
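To give a flavour of what that testing involved, here’s a minimal sketch of the kind of pre-flight token check I mean. It assumes the open-source tiktoken tokeniser and an illustrative 8,000-token budget; real context windows vary by model, and the helper name is mine, not part of any official workflow:

```python
import tiktoken

# Illustrative budget only; real context windows vary by model.
TOKEN_BUDGET = 8000

def fits_in_context(system_prompt: str, user_input: str,
                    reply_headroom: int = 1000) -> bool:
    """Check that the prompt plus an expected reply stays under budget."""
    enc = tiktoken.get_encoding("cl100k_base")
    used = len(enc.encode(system_prompt)) + len(enc.encode(user_input))
    return used + reply_headroom <= TOKEN_BUDGET

system = "You are a friendly, plain-English assistant for estate agents."
question = "Draft a viewing follow-up email for a first-time buyer."
print("OK to send" if fits_in_context(system, question) else "Trim the prompt")
```

A check like this catches silent truncation early, which is one practical answer to the contextual drift and token-limit quirks mentioned above.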
The Unexpected Insights?
Oh, loads. But one stands out: most people don’t know what they need until they see it in action. I’d make a GPT thinking it was niche (like one for estate agents or for safeguarding concerns), and boom: suddenly it’s the most requested or talked about. It taught me that we’re still in the infancy of AI adoption, and that showing is far more effective than telling.
Also, the emotional feedback was incredible. People messaged me saying my GPT helped them with their child’s education, their mental health, or their community project. That really grounded me—it’s not just tech. It’s transformation. 💥
In your view, how can strategic marketing enhance the public’s perception and understanding of AI, especially in promoting ethical and responsible AI usage?
Strategic marketing isn’t just about selling stuff—it’s about shaping understanding and building trust.
When it comes to AI, we’re facing a massive perception gap. Most people either think AI is going to steal their jobs or turn into Skynet. So here’s the thing: we need to rebrand AI. Not as a threat, but as a tool for good. And that’s where strategic marketing steps in.
It starts with storytelling. Not science fiction, but human-centric stories that show real-world impact. For example, how AI can reduce NHS waiting lists, support neurodiverse students, or help farmers predict crop yields. These are the kinds of stories that cut through the noise and connect.
Next up: transparency. Strategic marketing must help demystify AI—no black boxes or jargon. Clear, digestible comms that explain how AI works, where it’s being used, and—crucially—what data it's using. Think of it like nutritional labels, but for algorithms.
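As a rough illustration (a hypothetical sketch, not an existing standard), such a label could be as simple as a small structured record that every AI deployment publishes alongside itself. All the field names here are mine, chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    """A 'nutritional label' for an AI system: plain facts, no jargon."""
    name: str
    purpose: str                  # what the system is for, in one sentence
    data_sources: list[str]       # where its data comes from
    personal_data_used: bool      # does it process personal data?
    human_in_the_loop: bool       # can a person override its output?
    known_limitations: list[str] = field(default_factory=list)

label = AlgorithmLabel(
    name="Waiting-list triage assistant",
    purpose="Suggests appointment priority for clinicians to review",
    data_sources=["anonymised referrals", "historical wait times"],
    personal_data_used=True,
    human_in_the_loop=True,
    known_limitations=["not validated for paediatric referrals"],
)
print(label)
```

The point isn’t the code; it’s that the fields are fixed, plain, and comparable across systems, exactly like the nutrition panel on food packaging.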
And then there's community engagement. You can’t market responsible AI from a podium. It needs co-creation, grassroots input, and feedback loops. I use this approach in my initiatives like the AI Driving Test and Digital Mayors—because people are more likely to trust tech they’ve had a say in shaping.
Finally: consistency and credibility. If we want people to trust AI, we need to model that in our campaigns. Ethical marketers must walk the walk—showing that AI isn’t just powerful, but purposeful.
You’ve been a pivotal figure in establishing collaborative communities focused on AI. Can you describe how these communities have contributed to the broader conversation on AI responsibility and innovation?
Yes, I’ve always believed that AI should not exist in a vacuum; it’s a process of collaboration. That belief drove me to found the AI Collective after I co-founded The Age of Human Think Tank: two platforms that exist not to control the AI conversation, but to democratise it. And the result? Incredible.
These communities have become safe, inclusive spaces where school leavers, startup founders, policy-makers, and professors all come together to explore AI without jargon, judgement, or hype. That alone shifts the narrative—from fear to curiosity, from control to co-creation.
Here’s what we’ve achieved:
AI Literacy at the Grassroots: By creating GPTs for everyday use (like "My Safeguarding Buddy" or "My LinkedIn AI Assistant"), we’ve enabled people to use AI safely before they even fully understand it. It’s learning by doing—but responsibly.
Ethical Leadership Incubation: Through initiatives like the proposed AI Driving Test and National AI Leadership Programme, we’re identifying and mentoring future leaders who will shape AI policy from a values-first perspective.
Policy Influence: Our Open Letter response to the UK Government’s AI Opportunities Action Plan was a direct result of community dialogue. It wasn’t just my voice; it was a collective push for Digital Mayors, AI literacy hubs in post offices, and AI in lifelong learning.
Hybrid Innovation: We’re now seeing cross-pollination, with people from different sectors teaming up. For instance, an academic worked with a local authority to co-develop an AI curriculum, and a charity partnered with a coder to support mental health interventions. That wouldn’t happen without these communities.
At their core, these communities foster Hybrid Intelligence: where humans and machines co-evolve, and, more importantly, where humans support each other in shaping the role AI should play in our shared future.
As a part-time educator in innovation management, how do you incorporate the latest AI developments into your curriculum, and what impact do you see it having on your students?
As a Visiting Lecturer, I see it as my duty—not just a nice-to-have—to weave the latest AI developments into the curriculum. The world isn’t standing still, so why should our lectures?
So how do I do it? Three key ways:
Live, Working AI Examples:
I don’t just talk about AI theory—I show it. I bring in some of the custom GPTs I’ve created (like “My Academic Mentor” or “Hybrid Intelligence Copilot”) right into the session. Students are often stunned to see AI not only summarising reports or generating ideas, but doing so ethically and contextually. It sparks curiosity, fast.
Co-Creation of AI Projects:
I actively involve students in the design and testing of new GPT models. Some have gone on to prototype AI assistants for mental health, sustainable business, or event planning. It’s a powerful lesson in agency—they’re not passive recipients, they’re shapers of AI.
Discussion on Ethics & Bias:
We dive deep into what it means to use AI responsibly. I pose questions like, “Would you let this model decide your hiring shortlist?” or “How do you handle transparency in customer comms if you’re using AI copywriting tools?” These lead to some of the most animated debates I’ve ever had in a classroom!
The Impact?
Genuinely transformative. Many students come in nervous or sceptical about AI. By the end, they’re leading dissertation topics on it, applying AI tools in their part-time jobs, or—my personal favourite—going home and teaching their parents how to use it.
It turns AI from a threat into an opportunity. From abstract to actionable. And that’s what education should be.
Given your role in founding various influential initiatives like AI Curious and Age of Human, how do you envision these platforms evolving to address the future challenges of AI?
To me, platforms like https://nadio.ai, AI Curious and The Age of Human are living organisms, not static entities. They’re shaped by the conversations, collaborations, and challenges of the communities that use them. And with the way AI is evolving, these platforms have to stay fluid, inclusive, and responsive.
So here’s where I see them going:
🔮 1. From Awareness to Action
We started out demystifying AI—helping people get curious without being overwhelmed. But now? We’re moving from “what is AI?” to “how do I lead with AI?” That’s why I’m developing initiatives like:
The AI Driving Test – a foundational literacy standard.
Digital Mayors – a grassroots governance model.
National AI Leadership Programmes – to nurture ethical AI leaders across sectors.
🌍 2. Hyper-Local meets Global
AI Curious is already global—members span 20+ countries. But the next evolution is hyper-local deployment. Imagine every borough, town or village having access to tailored GPTs that speak their language, solve their problems, and reflect their culture.
Think:
Localised AI learning hubs in libraries and post offices.
Region-specific GPTs co-designed with local communities, not just engineers.
Community-funded AI projects for SMEs, schools, and charities.
🧠 3. Deepening Hybrid Intelligence
At The Age of Human, we’re exploring Hybrid Intelligence—the collaboration between human intuition and machine computation. Future iterations will focus on:
Multi-stakeholder think tanks exploring ethics, agency, and accountability.
Collaborative research on post-GenAI workplace transformation.
AI-assisted decision-making frameworks for governance and education.
🚀 4. Platform as Practice, not just Content
Both platforms will become more experiential. Less about “joining a group”, more about “doing together.” So expect:
Micro-accreditations through live co-piloting experiences.
GPT sandboxes for users to test, fail, iterate and share.
Live hybrid summits (like The AI Collective Summit in the UK) to bring real faces into digital spaces.
What do you believe are the key elements that organizations should focus on to successfully integrate and leverage AI in their business models, according to your experiences in both a practical and educational context?
Spot on. This is the question every boardroom, classroom, and coffee break should be tackling right now.
In both my consultancy work with global firms and my teaching role at Roehampton, I’ve seen the same challenge pop up repeatedly: organisations want AI, but they don't know what they’re actually buying into.
So here’s what I believe are the non-negotiables—the 5 key elements that must be in place for AI to land well and stick:
🧭 1. Purpose Before Platform
Start by asking why you're using AI—not just what tool you’re using. Are you solving a customer pain point? Speeding up admin? Enhancing creativity? Tech for tech’s sake is expensive and often damaging. Define your north star first.
🧠 2. AI Literacy Across All Levels
I’m not just talking coders and data scientists. Your marketing team, HR staff, and even your receptionist should understand what AI is (and isn’t). This is why I advocate for initiatives like the AI Driving Test—a practical, role-based understanding of how to use AI ethically and effectively.
🛠 3. Hybrid Teams = Hybrid Intelligence
One of the best outcomes I’ve seen was when a retail company paired data scientists with floor staff to co-design an AI-powered inventory tool. That’s Hybrid Intelligence: human insight meets machine capability. Diverse teams = better AI outcomes.
🧾 4. Ethics by Design
Ethical AI isn’t a plug-in. It has to be built into the culture. That means:
Transparent data use policies
Bias audits (a minimal sketch follows after this list)
Explainability dashboards
Stakeholder feedback loops
Make ethics a feature, not an afterthought.
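To make the bias-audit bullet concrete, here’s a minimal sketch of the simplest check you could run on, say, a shortlisting process: the classic four-fifths rule applied to selection rates per group. The data and function names are illustrative, not taken from any real audit:

```python
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, selected) decision records."""
    totals, picked = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += selected
    return {g: picked[g] / totals[g] for g in totals}

def passes_four_fifths(rates: dict[str, float]) -> bool:
    """Flag disparate impact if any group's selection rate falls below
    80% of the highest group's rate (the four-fifths rule)."""
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Toy shortlisting records: (applicant group, shortlisted?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                              # A is roughly 0.67, B roughly 0.33
print("Passes four-fifths rule:", passes_four_fifths(rates))  # False
```

A ten-line check like this won’t make a system fair on its own, but it turns “we audit for bias” from a slogan into a repeatable test.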
📈 5. Scalable Small Wins
I always advise starting with micro-projects. Test, iterate, and expand. That could be:
An AI-powered customer chatbot
Personalised email marketing segmentation
Predictive HR recruitment tools
Learn from these and scale up with confidence and context.
Factoid: A recent McKinsey report found that organisations with AI education embedded across departments—not just tech teams—saw 30% higher ROI on AI adoption compared to those who centralised it within IT alone. That’s huge.
Nadio Granata is the founder of The AI Collective, an initiative aimed at democratising access to responsible AI. As a CMAIO and positive disruptor, he uses his strategic marketing skills to build collaborative communities. Nadio teaches part-time at universities and has designed training programmes on AI marketing. He has founded several organisations, including The Independent Consultancy Exchange and Age of Human. He is also an author and influencer, engaged in fundraising and AI-awareness projects.