
Danny’s insights on AI

What is fair and responsible AI?

We have just witnessed the release of the latest version of ChatGPT, and what I have observed so far is truly astounding in terms of its capabilities.
It takes instructions, proactively analyses what is asked and produces very high-quality answers within seconds.

AI in the insurance space

With this power, there are a host of other things that can be done. Some of this will benefit businesses and consumers, but it can also be put to negative use. It can shorten the time it takes to put together PowerPoint presentations, take minutes, summarise meetings and even draft correspondence in a specified tone and language. Furthermore, it can provide legal opinions comparable to those of qualified, experienced attorneys.

What we are finding more and more in the insurance space is that AI is able to underwrite policies by applying specific rating tables and underwriting criteria, as well as administer claims in ways human beings cannot. For example, it can assess damage to vehicles from photographs, source part prices and provide a final assessment quote within seconds. It can also check the facts provided by policyholders for signs of fraud, although, comparatively speaking, we are still in the early stages.
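To make the underwriting point concrete, below is a minimal, purely illustrative Python sketch of how a rating table and simple underwriting criteria might be applied to a motor risk. The age bands, rates, loadings and decline rule are hypothetical examples and do not reflect any real insurer's criteria.

```python
# Illustrative sketch of rule-based underwriting logic.
# All bands, rates and loadings below are hypothetical.

BASE_RATE_BY_AGE_BAND = {          # annual rate per 1,000 of vehicle value
    (18, 24): 45.0,
    (25, 39): 28.0,
    (40, 64): 22.0,
    (65, 99): 30.0,
}

def underwrite(age: int, vehicle_value: float, claims_last_3_years: int) -> dict:
    """Apply a rating table and simple underwriting criteria to a motor risk."""
    # Underwriting criterion: decline risks with excessive recent claims.
    if claims_last_3_years > 3:
        return {"decision": "decline", "reason": "claims history"}

    # Look up the base rate for the applicant's age band.
    rate = next(
        (r for (lo, hi), r in BASE_RATE_BY_AGE_BAND.items() if lo <= age <= hi),
        None,
    )
    if rate is None:
        return {"decision": "refer", "reason": "age outside rated bands"}

    # Load the premium by 10% per recent claim.
    premium = (vehicle_value / 1000) * rate * (1 + 0.10 * claims_last_3_years)
    return {"decision": "accept", "annual_premium": round(premium, 2)}

print(underwrite(age=30, vehicle_value=250_000, claims_last_3_years=1))
# -> {'decision': 'accept', 'annual_premium': 7700.0}
```

In practice, an AI-driven system layers statistical or machine-learned pricing on top of rules like these; the sketch only shows the deterministic rating-table step.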


A dangerous shadow that comes in with it

Because of the power of the tool (I have only touched on a small number of examples above), there is a dangerous shadow that comes with it. A human specialist is still needed to check whether the output produced is correct and accurate. The longer AI is part of our lives, the fewer such experts (needed to check AI) there will be, as more of the younger generation becomes reliant on the output without needing to think through and analyse the data it is given. For example, if a legal opinion is produced, you still need a lawyer who can read the opinion and assess the soundness of its findings. Already the quality of case law research that can be done and analysed is very impressive. You need the attorneys who grew up without AI to be available.

The other dark side is the clear potential for fraud, theft and other criminal behaviour. We already rely heavily on voice and facial recognition to identify ourselves. AI is already at a point where it can imitate people's looks, voices and characteristics, and identity theft used to hack into bank accounts will become common, if it has not happened already. Our phones and sensitive applications are easy to access with facial recognition. Imagine receiving an AI-generated call in your spouse's voice asking you to confirm a credit card or bank account password. The prospect of AI making major decisions for people in sensitive cases such as wars, criminal prosecutions and other public policy shows where this can lead.

The major world legal jurisdictions are in the process of drafting regulations to ensure AI is used fairly and responsibly by corporates, for the right purposes and with proper checks and balances. If it is left to run unchecked and unregulated, many areas currently administered by human beings may be taken over by machines, which is dangerous given how early we are in the technology's development.

Developments in this area

As a point of curiosity, I recently saw a new dental procedure in which robots conduct all the examinations. Established standards are needed for these AI-supported robots, as they influence both health and financial advice, with AI systems now managing unit trusts. Can a computer be held legally responsible? There are significant legal and IT developments in this area.

Danny Joffe
Director & Chairman of Risk Committee
