Dave: B Corp companies want to make a positive impact through business. AI could supercharge that. It can analyse complex social systems, spot inequality and help companies develop solutions that benefit people and the planet. It could also turn impact measurement from a backward-looking exercise into a real-time tool, helping purpose-driven companies adapt and respond quickly. How do you think AI technologies can fit into - or maybe even challenge - the social and environmental values that B Corps stand for?

Justin: AI in the workplace presents a balance of opportunities and challenges. On one hand, it has the potential to significantly enhance inclusivity and diversity, offering tools and insights that enable more supportive, equitable working environments. On the other hand, important concerns remain, such as the energy demands of AI systems, potential biases embedded in algorithms, and the ethical implications of data usage during training.

Despite the challenges, there’s no doubt AI is transforming how organisations and employees work. By providing more intuitive access to data, and taking over repetitive and time-consuming tasks, it frees you up to focus on the more creative and strategic aspects of your role - allowing innovative potential to cut through. This dual nature of AI - its promise versus its pitfalls - means a thoughtful approach to its integration in the workplace is needed.

From a B Corp perspective, one of the most interesting opportunities lies in how AI can make work more inclusive, by improving access to the tools that help people excel at their jobs.

As you know, last month I attended the British Film Institute (BFI) AI Creative Summit which, as you’d expect, was focused on the film industry. One of the key take-aways was the striking shift in how work is being done. Tasks that used to need people to be physically present on film sets or in crowded editing studios can now be done remotely. With the help of advanced editing tools and AI automation taking care of the more repetitive, time-consuming work, you can get high-quality results from home.

Then there’s the progress in accessibility. Tools like text-to-speech and speech-to-text not only simplify workflows, but also make creative industries more inclusive, particularly for people with disabilities. These innovations remove barriers, allowing a more diverse range of talent into the filmmaking process. It’s real progress, and it’s a trend we’re likely to see expand well beyond the film industry into other sectors. The potential for AI to redefine how, where, and who can contribute creatively is one to watch.

For an agency like ours, AI has a slightly different focus. It’s about improving access to information, and enhancing our ability to monitor and report on various aspects of our operations. A big part of this involves showing that we have strong governance in place, or that we’re making strides toward it. AI helps us collect and analyse data more efficiently, whether it’s about our environmental impact, social contributions or internal workplace metrics.

Dave: And also, more generally, analyse complex social systems, spot inequality, and help companies develop solutions that benefit people and the planet.

Justin: Yes, all that and more. Over time, we’re seeing better tools for tracking carbon footprints, especially for larger companies. AI is now being used to analyse entire supply chains and verify sustainability credentials, which is a huge help. Right now, this kind of work - manually checking everything - is incredibly time-consuming, and typically only done once a year. Ideally it should happen quarterly, with the insights feeding into corporate governance to help set clear objectives and measurable results.

For businesses, this is a win-win. It helps employees manage this data more efficiently, while enabling companies to spot trends and respond proactively. However, there’s a challenge here: AI itself isn’t entirely efficient or transparent yet.

For instance, its own carbon footprint is often unclear. We don’t always know where the data used to train these models comes from, or if it was ethically sourced. And many models haven’t been rigorously tested for bias, which means their outputs can still reflect systemic issues. That’s becoming more widely recognised, and while these models have already improved significantly over the past couple of years, there’s still work to be done.

Even as AI becomes more energy-efficient, its rapid expansion is likely to drive total energy use higher. So the challenge lies in pairing this growth with clean energy solutions and actually using AI itself to advance technologies like battery storage and carbon capture. Equally important is making sure AI training data is ethically sourced, bias-free and well-structured, to ensure fair, accurate and inclusive outputs. If we're able to align AI development with sustainable practices, I think the industry can address environmental and ethical concerns - and still drive forward with innovation.

If we can address these challenges, AI could align closely with what B Corps stand for: transparency, accountability, and working toward the betterment of everyone in the business ecosystem - employees, customers and business owners alike - while making a positive social impact.

Dave: There does seem to be a rapidly growing number of companies using AI in a way that doesn’t just focus on performance but actually makes that positive impact.

Justin: Yeah, definitely. For example, AlphaFold is revolutionising science by using AI to predict protein structures from amino-acid sequences, which has incredible implications for health and research.

In healthcare and biotech, AI is being used to tackle complex problems, from drug discovery to personalised medicine. Meanwhile, NGOs are applying AI in innovative ways in parts of Africa, particularly in agritech, to improve farming practices - and in financial services to provide access to banking for under-served communities.

They're even using AI to expand access to legal support. One standout example is a Swiss NGO called AsyLex that’s developed an AI system offering free legal documentation and advice for asylum seekers across Europe. This system is helping to speed up the often gruelling processes of asylum claim reviews, background checks and identity verification - tasks that are both essential and incredibly challenging. It’s a great demonstration of how AI can be used to address humanitarian challenges.

Of course, while these applications are exciting, they need to be rigorously monitored for biases to ensure fairness and reliability. But overall, it’s clear that AI has enormous potential to make a real difference in these areas.

Dave: When we’re helping clients build their AI expertise, what skills and ethical ideas are we making sure to include in our team training?

Justin: One of the key areas we emphasise is data governance. The first question to ask is whether they have the right to use the data they’re inputting into a large language model. Have they obtained proper permissions? Do they actually own the data? Can they clearly demonstrate what data has been used and what hasn’t in the outputs generated by AI? Even seemingly simple examples, like using AI to create Christmassy versions of staff photos, could lead to problems if team members haven’t given their consent. Being vigilant about data governance is something we stress to everyone.

Another critical aspect is educating teams about the practical and legal considerations of using AI-driven tools, like Copilot or other generative design and development applications. For instance, copyright issues can arise when using code-generation tools. If AI generates code for a client project, that code technically doesn’t have an owner. Yet, if a software contract promises the client ownership of the code’s copyright, you’re in a legal grey area because the AI-generated code doesn’t convey that ownership. This isn’t just an issue for software; it applies broadly - like a journalist using AI to write an article and selling the copyright to a publication. In such cases, neither the journalist nor the publication actually holds the copyright. These legal nuances are complex, still evolving, and not widely understood. Even though frameworks like the EU’s AI Act exist, very few people have read or fully grasped them. This knowledge gap is something that urgently needs attention as AI adoption grows rapidly.

We also make sure teams understand the risks and limitations of AI. While these tools can massively boost productivity, they come with significant challenges. For example, when using AI to generate a thousand-page document, how deeply have you reviewed and understood it? Did you just skim it, tweak a few words and send it off? Can you confidently say the content is accurate, unbiased and reliable? The same concerns apply to AI-generated code - how do you ensure it’s free from critical bugs? And in design, how do you verify that the output doesn’t unintentionally include copyrighted or derivative work?

A major challenge is that many people don’t fully understand the provenance of AI-generated outputs or the legal implications of using them. As AI becomes an integral part of larger projects, it’s crucial to ensure not only the validity of the results, but also their ethical and legal soundness.

Dave: How can businesses help employees not just get good at using AI, but also understand how to use it responsibly? For example, they could frame AI training as an ethical discovery journey - combining technical skill-building with exploring the profound human implications of these technologies. The goal would be to encourage employees to become mindful and conscientious tech users, viewing AI not just as a tool but as a partner in collaborative intelligence, with wisdom, empathy and critical thinking at the forefront.

Justin: Honestly, I don’t think many businesses are doing enough in this area. In fact, a lot of companies have strict policies banning AI use at work, but the reality is employees are still using it - it’s difficult to enforce. What’s really needed is for businesses to create a clear assessment framework that helps employees navigate AI use responsibly.

This framework should guide them through evaluating the potential impact and value of using AI. For example, they need to consider ethical questions, like the origin of the data they’re using, and the effect it might have on their colleagues or workflows. The value side is equally important - yes, AI can boost efficiency, and that’s often the main focus right now. But we also need to think critically about risks, like de-skilling employees, or diminishing the quality of products or services.

By tying ethical considerations directly to practical outcomes, companies can help employees become not just skilled AI users, but thoughtful and responsible ones. It’s about balancing efficiency with integrity.

Dave: I’ve been thinking of AI as the brain for businesses - using real-time data to save energy, streamline supply chains and track carbon emissions so companies can actively reduce them. B Corps are all about running in a sustainable way. From our perspective, how can AI help companies lower their carbon footprint, create greener workflows, or boost sustainability efforts?

Justin: AI does have the potential to drive sustainability in impressive ways, though there are challenges. Take the construction industry as an example: inefficiencies in supply chains and the lack of transparency around materials are big issues. It’s often hard to prove whether materials come from sustainable sources because they pass through multiple countries and still rely heavily on paper processes. AI-enabled software can now analyse satellite imagery and material sampling results efficiently and accurately enough to help verify the origins and transit of materials through supply chains - confirming that wood, for instance, was sourced from a sustainable area, or that minerals were ethically mined. AI could also track shipments to ensure they follow green practices along the supply chain.

Digital industries face a different challenge: inefficient data hosting and storage systems that consume unnecessary energy. Startups are now using AI to optimise server usage - essentially turning resources on and off as needed, and sharing server loads more efficiently. This can significantly cut electricity use and reduce CO2 emissions compared to running dedicated data centres full-time.
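To make that concrete, here’s a minimal sketch of the kind of load-based scale-up/scale-down decision described above. The thresholds, the naive forecast and the function names are illustrative assumptions for this sketch, not any particular provider’s API.

```python
import math

def predict_load(recent_requests_per_min: list[float]) -> float:
    """Naive forecast: the mean of recent traffic (a real system would use a trained model)."""
    return sum(recent_requests_per_min) / len(recent_requests_per_min)

def servers_needed(predicted_load: float, capacity_per_server: float = 500.0) -> int:
    """Round up so provisioned capacity always covers the forecast, never below one server."""
    return max(1, math.ceil(predicted_load / capacity_per_server))

def scaling_decision(current_servers: int, recent_requests_per_min: list[float]) -> int:
    """Positive result: servers to power on; negative: servers that can be shut down."""
    target = servers_needed(predict_load(recent_requests_per_min))
    return target - current_servers

# Example: traffic is tailing off overnight, so two of the five servers can be powered down.
print(scaling_decision(current_servers=5, recent_requests_per_min=[1800, 1500, 1200]))  # -2
```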

There’s still a lack of transparency in the AI supply chain itself. For instance, how much energy does it take to train a model or process each inference request? What’s the CO2 footprint of using these tools? Without clarity on those metrics, it’s hard to fully assess the environmental trade-offs.

That said, there are clear wins in sectors like agritech. AI-guided tools - like seed-injection systems, virtually managed vertical farms and hyperlocal climate prediction - reduce the need for traditional tilling, slashing CO2 emissions. Similarly, in disaster response, AI-driven drones are replacing helicopters for fighting fires, reducing fuel consumption significantly.

The problem is, while these use cases show promise, we don’t yet have the full picture of their broader environmental costs - like the carbon footprint of building those drones. With better transparency and more precise measurements across supply chains, I think AI’s role in sustainability will only become more impactful. For now, it’s a mix of exciting potential and areas needing improvement.

Dave: How can we encourage clients to think beyond just efficiency and productivity when it comes to AI - focusing instead on its impact on social values, employee wellbeing, and aligning with their broader goals?

Justin: A great starting point is to look at established frameworks that address these broader considerations. For example, the World Economic Forum's Prism framework is one of the most recognised. Over the past year, it’s been tested and adopted by various social innovation NGOs and companies. The framework emphasises critical aspects like data governance, provenance and the importance of having a clear AI policy.

It provides practical guidelines for evaluating AI usage - asking questions like: Is it ethical? Is it fair? Does it deliver value? And what’s the cost of delivering that value? These align closely with principles that resonate with B Corp values. In fact, there’s an opportunity here for the B Corp movement itself to incorporate AI governance and best practices into its framework, making sure businesses are not just efficient but also responsible and values-driven in their approach to AI.

Dave: They must be doing that...

Justin: Yeah, I haven't delved into it, but I would have thought they must be discussing it and bringing it in. There’s also an opportunity to adapt the Prism framework into a practical tool - something businesses could use to evaluate their practices. Imagine plugging in data and being able to confirm, step by step, that you're aligned with ethical and social targets.

Dave: So, would you say we as an agency are aiming for a Prism-like approach to measure broader, more meaningful outcomes?

Justin: We haven’t formalised it into a tool we use consistently across projects. But we absolutely should. Even something as simple as a spreadsheet could help us track key factors for every AI project. For example, we’ll document where the data comes from, identify risks, outline mitigations for biases or provenance issues, and consider ethical and social impacts. At the same time, we’ll evaluate the value generated by the project.

Over time, we can refine this process into a decision-making framework. Picture a system where we score each project step - on a scale of 1 to 10 - at the outset. If a project doesn’t score at least a 6 out of 10 against our framework, we’d probably decide not to move forward with it.
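As a rough illustration, something like the following could sit behind that spreadsheet. The criteria names, the use of an average score and the 6-out-of-10 threshold are assumptions made for this sketch, not a finalised framework.

```python
from dataclasses import dataclass, field

# Hypothetical assessment criteria, each scored 1-10 at the outset of a project.
CRITERIA = [
    "data_provenance",     # do we know where the data comes from?
    "consent_and_rights",  # do we have permission to use it?
    "bias_mitigation",     # have biases been identified and mitigated?
    "social_impact",       # effect on employees, customers and wider stakeholders
    "value_delivered",     # genuine value, not just cost savings
]

THRESHOLD = 6  # minimum overall score (out of 10) to move forward

@dataclass
class ProjectAssessment:
    name: str
    scores: dict[str, int] = field(default_factory=dict)  # criterion -> score 1..10

    def __post_init__(self):
        unknown = set(self.scores) - set(CRITERIA)
        if unknown:
            raise ValueError(f"Unknown criteria: {unknown}")

    def overall(self) -> float:
        return sum(self.scores.values()) / len(self.scores)

    def decision(self) -> str:
        if self.overall() >= THRESHOLD:
            return "proceed"
        weakest = sorted(self.scores, key=self.scores.get)[:2]
        return f"revisit with the client (improve: {', '.join(weakest)})"

# Usage example with made-up scores.
assessment = ProjectAssessment(
    name="chatbot-pilot",
    scores={
        "data_provenance": 7,
        "consent_and_rights": 5,
        "bias_mitigation": 6,
        "social_impact": 8,
        "value_delivered": 7,
    },
)
print(assessment.overall())   # 6.6
print(assessment.decision())  # proceed
```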

Dave: Or we’d go back to the client…

Justin: Exactly. We’d say, “To move ahead, you need to improve your data, or perhaps add more human-in-the-loop oversight to ensure you’re not disenfranchising your team.” This would turn the framework into both a validation and refinement tool, helping clients see AI as more than just a way to save costs.

Many clients initially focus on the bottom line - thinking, “AI will cut 30% off my costs.” But our approach is to make it clear from the outset that we do things differently. Following this framework ensures a better end product - one you can confidently stand behind because it’s been verified ethically and socially.

Over the next year or so, I expect this approach will become a necessity. Think two, three, or four years down the road - when AI is impacting jobs on a much larger scale. Businesses that can demonstrate strong ethical and social responsibility will undoubtedly hold a competitive edge.

Dave: It sounds like we have a real opportunity to lead by example here - not just helping clients adopt AI responsibly, but showing them the value of integrating ethics and social impact into their decision-making. By refining this into a practical tool, making it part of our standard approach, we’ll not only be delivering better outcomes - we’ll also be helping to shape a future where AI is used in ways that truly benefit everyone.