The UK outlines plans to regulate AI startups


From masters of the digital universe to pariahs peddling a machine-dominated dystopia. Perhaps that’s not quite the path AI developers have taken, but in recent months the debate over the benefits and risks of artificial intelligence tools has intensified, fueled in part by the arrival of ChatGPT on our desktops. Against this backdrop, the UK government has published plans to regulate the sector. What does this mean for startups?

In submitting its proposals for a regulatory framework, the government has promised a light-touch, innovation-friendly approach while still addressing public concerns.

And startups working in the sector were likely relieved to hear the government emphasize the opportunities rather than dwell on the risks. As Minister for Science, Innovation and Technology Michelle Donelan put it in her preview of the published proposals: “AI is already delivering fantastic social and economic benefits to real people – from improving NHS medical care to making transport safer. Recent developments in things like generative AI give us a glimpse into the tremendous opportunities that lie ahead in the near future.”

So, mindful of the need to help Britain’s AI startups – which collectively raised more than $4.65 billion in venture capital investments last year – the government has shied away from doing anything too radical. There will be no new supervisor. Instead, communications watchdog Ofcom and the Competition and Markets Authority (CMA) will share the heavy lifting. And oversight will be based on broad principles of safety, transparency, accountability and governance, and access to redress rather than being overly prescriptive.


A range of AI risks

Nevertheless, the government identified a range of possible drawbacks. These include risks to human rights, fairness, public safety, social cohesion, privacy and security.

For example, generative AI – technologies that produce content in the form of words, audio, images and video – can threaten jobs, create headaches for educators or produce images that blur the line between fiction and reality. Decision-making AI – widely used by banks to review loan applications and identify potential fraud – has already been criticized for producing results that simply reflect the industry’s pre-existing biases, effectively giving those biases a stamp of validation. Then, of course, there’s the AI that will support driverless cars or autonomous weapon systems. The kind of software that makes life-or-death decisions. That’s a lot for regulators to get their heads around. If they get it wrong, they may nip innovation in the bud or fail to properly address real problems.

So what does this mean for startups working in the sector? Last week I spoke with Darko Matovski, CEO and co-founder of CausaLens, a provider of AI-driven decision-making tools.

The need for regulation

“Regulation is needed,” he says. “Any system that could affect people’s livelihoods needs to be regulated.”

But he acknowledges that it won’t be easy given the complexity of the software on offer and the diversity of technologies within the industry.

Matovski’s own company, CausaLens, offers AI solutions that support decision making. To date, the company – which raised $45 million from VCs last year – has sold its products in markets such as financial services, manufacturing and healthcare. The use cases include price optimization, supply chain optimization, financial services risk management, and market modeling.

At first glance, decision-making software should not be controversial. Data is collected, crunched and analyzed to enable companies to make better, automated choices. But of course it can be controversial, because of the danger of inherent biases when the software is “trained” to make those choices.

So the way Matovski sees it, the challenge is to create software that removes the bias. “We wanted to create AI that people can rely on,” he says. To do that, the company’s approach has been to create a solution that effectively monitors cause and effect on an ongoing basis. This allows the software to adapt to how an environment – for example a complex supply chain – responds to events or changes, and feeds this into decision making. The idea is that decisions are made based on what is happening in real time.
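To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of causal decision loop described above – not CausaLens’s actual product or method. The class names, the pricing scenario and the confounder (“market condition”) are all hypothetical; the point is simply that the effect of an action is estimated within each environment state, and the estimate adapts as new observations stream in:

```python
# Illustrative sketch only -- a toy stand-in for the "monitor cause and
# effect continuously, then decide" idea, NOT CausaLens's actual software.
from collections import defaultdict


class CausalDecisionModel:
    """Tracks average demand per (market_condition, price_cut) cell.

    Estimating the price-cut effect *within* each market condition
    (rather than from the raw correlation) is a crude way to avoid
    being fooled by a confounding environment variable.
    """

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def observe(self, condition, price_cut, demand):
        # Stream in new data so the estimate adapts in (near) real time.
        key = (condition, price_cut)
        self.totals[key] += demand
        self.counts[key] += 1

    def effect(self, condition):
        # Estimated effect of a price cut within one market condition.
        def mean(cut):
            key = (condition, cut)
            return self.totals[key] / self.counts[key] if self.counts[key] else 0.0
        return mean(True) - mean(False)

    def decide(self, condition):
        # Cut prices only where the estimated causal effect is positive.
        return "cut_price" if self.effect(condition) > 0 else "hold_price"


model = CausalDecisionModel()
# Hypothetical observations: cuts help in buoyant markets, hurt in weak ones.
model.observe("buoyant", True, 120)
model.observe("buoyant", False, 100)
model.observe("weak", True, 80)
model.observe("weak", False, 90)
print(model.decide("buoyant"))  # cut_price
print(model.decide("weak"))     # hold_price
```

The design choice worth noting is that the decision rule reads only from continuously updated per-condition estimates, so if the environment shifts – say a supply chain disruption changes how demand responds to price – fresh observations pull the recommendation along with it.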

Perhaps the bigger point is that startups need to think about addressing the risks associated with their particular flavor of AI.

Keeping pace

But here’s the question. With dozens or perhaps hundreds of AI startups developing solutions, how do regulators keep up with the pace of technology development without hindering innovation? After all, regulating social media has proven difficult enough.

Matovski says technology companies need to think in terms of addressing risk and operate transparently. “We want to be ahead of the regulator,” he says. “And we want to have a model that can be explained to regulators.”

The government, in turn, wants to encourage dialogue and cooperation between regulators, civil society organizations and AI startups and scale-ups. At least that’s what the White Paper says.

Room on the Market

Part of the UK government’s intention in drawing up its regulatory plans is to complement an existing AI strategy. The key is to provide a fertile environment for innovators to gain market traction and grow.

This raises the question of how much space there is in the market for young companies. The recent publicity around generative AI has focused on Google’s Bard software and Microsoft’s relationship with ChatGPT maker OpenAI. Is this a market for big tech players with deep pockets?

Matovski thinks not. “AI is pretty big,” he says. “There’s enough for everyone.” Pointing to his own slice of the market, he argues that “causal” AI technology has yet to be fully exploited by the bigger players, leaving room for new companies to gain market share.

The challenge for everyone working in the marketplace is to build trust and address the genuine concerns of citizens and their governments.