Artificial intelligence can be a double-edged sword, cutting the very hand it benefits. In the cybersecurity landscape, AI has the power to fuel stronger-than-ever defenses. Inevitably, it is also driving more sophisticated, effective attacks in the hands of threat actors.
Cyber insurance companies and their policyholders will need to prepare for how AI capabilities, for good and ill, will impact cybersecurity and insurance coverage. How will cyber insurance be reshaped by the explosion of AI technology?
AI in the Hands of Threat Actors and Defenders
AI can power more advanced, large-scale attacks. It can help hackers more swiftly find and exploit vulnerabilities in their targets’ systems. Novice threat actors, state-based groups with extensive resources, and every hacker in between can find ways to use AI.
As threat actors ramp up their AI capabilities, so can the organizations they target. “Attacker speed may mean that new, AI-based defensive tools that can defend and remediate at comparable speed may be needed–and soon,” says Jamie Gerber, CFO of cybersecurity company SimSpace.
AI can be used in threat detection and prevention. “To identify possible cyberattacks, AI-based systems may continuously analyze network traffic, user activity, and system logs,” explains Sharmeen Rehman, a cyber insurance evangelist at Blackfire Cyber Insurance. “These systems can proactively identify and mitigate risks by studying patterns and abnormalities, assisting policyholders in preventing or minimizing cyber disasters.”
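The continuous pattern-and-anomaly analysis Rehman describes can be illustrated with a deliberately simplified sketch. The feature (requests per minute), the threshold, and the injected spike are all invented for illustration; production systems would use trained models over many signals, not a single z-score rule.

```python
# Toy anomaly detector over simulated "network traffic" values.
# Illustrative only -- real AI-based monitoring uses trained models
# across network traffic, user activity, and system logs.
import random

random.seed(0)  # make the simulated traffic reproducible

def zscore_anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return [i for i, x in enumerate(samples) if abs(x - mean) > threshold * std]

# Simulated requests-per-minute from one host: mostly normal, one spike.
traffic = [random.gauss(100, 5) for _ in range(200)]
traffic[150] = 400  # injected anomaly, e.g. a data-exfiltration burst

print(zscore_anomalies(traffic))  # flags the injected spike at index 150
```

The value of this kind of analysis for insurers is that it runs continuously, so an abnormal burst can be flagged and contained before it becomes a claim-sized loss.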
AI can also be a powerful tool in incident response and recovery. If AI does hasten data and system recovery following an incident, it could have a hand in limiting losses for policyholders and cyber insurers, Rehman points out.
While companies can greatly benefit from AI’s defensive capabilities, it also introduces risk into their operations. And cyber insurance companies are, naturally, concerned with understanding this risk. Rachel Rossini, product manager, cyber and technology E&O at AXA XL, notes that there is an element of privacy risk with platforms like ChatGPT. What kind of information are employees entering?
In May, Samsung employees inadvertently uploaded sensitive code to ChatGPT, which led to the company banning generative AI tools, Bloomberg reports. An indefinite, blanket ban is not feasible for most companies, but those that are using AI will need to acknowledge and mitigate the associated risk.
“How do you educate your employees on using these platforms? How do you make sure that’s … enforced?” Rossini asks. The answers to those questions could play a role in cyber insurance coverage in the future.
In other, more established insurance arenas, underwriting is a well-defined process. This is not the case in cyber insurance. “There’s no real standard process in which it’s done,” shares Rossini.
The underwriting models of today are not equipped to adequately assess risk, according to Gerber. “The lack of effective underwriting models means that insurers tend to focus on how many security tools a company has rather than the effectiveness of their people, processes, and technology,” he argues. “Without validating whether their tools are effective against AI-enabled attacks, or severe attacks more generally, insurers will not know how prepared an organization is to face their worst day in cybersecurity.”
AI is changing the way cyber insurers view and assess risk. For example, AXA XL is exploring ways this technology can make underwriting more efficient and accurate. Ran Lin, global data science and applied AI lead at AXA XL, and her team are aiming to leverage AI automation to shoulder some of the tedious document review work that underwriters must do. They are also looking at predictive modeling to help segment potential policyholder risk.
They are taking a cautious approach. “This is still in the exploratory phase,” Lin tells InformationWeek. “I think the entire industry is all trying to find the successful use cases.”
Paul Bantick, group head of cyber risks at specialist insurer Beazley, points out that data quality will be an important factor as insurance companies use AI to drive underwriting and claims decisions. He believes the cyber insurance industry needs to improve how it records and interrogates the data at its disposal. “The insurers with the best data will likely win in market competition,” he says.
Changes in the risk landscape and underwriting process lead to questions about premiums. Will AI make cyber insurance more or less expensive?
Over the past few years, cyber insurance premiums have been surging. A report from insurance broker Howden Broking found that annual rates increased more than 100% in the first half of last year. The report also notes that premiums have been flat or even declined slightly in recent months. But growing cyber risk and demand for coverage aren’t going away.
More accurate underwriting, driven by AI, could eventually translate into lower costs for policyholders.
“Insurance companies will use AI to determine the cyber resilience of their customers across different factors including deployed network security, data security capabilities, policy settings and training and education, which can in turn be reflected in a quoted premium,” says Danny Allan, CTO of Veeam.
But policyholders will need the cybersecurity capabilities to earn those lower premiums, which will likely mean leveraging AI in their cybersecurity strategy. If they cannot defend against rapidly evolving AI-driven threats, their premiums will likely be higher.
“Insurers need to drive companies to be able to prove effectiveness against all of these kinds of cyber impacts that can have a material impact for them to deserve the best rates — or even being insurable at all,” says Gerber.
AI automation could make the claims filing, evaluation, and settlement processes more efficient. “Algorithms that use machine learning can examine claims data, spot irregularities, and highlight possibly fraudulent behavior,” says Rehman. “This increases effectiveness, lowers expenses, and assures that policyholders’ requirements are met more quickly.”
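A hedged sketch of the irregularity-flagging Rehman describes: here a simple statistical rule flags any claim whose amount is an outlier relative to the median for its incident type. The claim records, incident types, and the 3x-median cutoff are invented for illustration; real insurers would use machine-learning models trained on historical claims data.

```python
# Toy claims-fraud flagging: mark claims whose amount is far above
# the median for their incident type. Illustrative only.
from statistics import median

claims = [
    {"id": 1, "type": "ransomware", "amount": 50_000},
    {"id": 2, "type": "ransomware", "amount": 55_000},
    {"id": 3, "type": "ransomware", "amount": 48_000},
    {"id": 4, "type": "ransomware", "amount": 400_000},  # suspicious outlier
    {"id": 5, "type": "phishing", "amount": 10_000},
    {"id": 6, "type": "phishing", "amount": 12_000},
]

def flag_suspicious(claims, factor=3.0):
    """Return IDs of claims exceeding `factor` x the median for their type."""
    by_type = {}
    for c in claims:
        by_type.setdefault(c["type"], []).append(c["amount"])
    medians = {t: median(amounts) for t, amounts in by_type.items()}
    return [c["id"] for c in claims if c["amount"] > factor * medians[c["type"]]]

print(flag_suspicious(claims))  # → [4]
```

Flagged claims would go to a human adjuster for review rather than being denied automatically, which is how this kind of triage lowers expenses while still meeting policyholders’ requirements quickly.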
Insurance companies could also use AI to examine claims data and better understand the connection between controls and loss. “[AI] will also theoretically help with the claims and bring down our loss ratio if we’re able to communicate the data to our clients,” Rossini explains.
The number of claims linked to the nefarious use of AI is also likely to rise. “We have already begun to see claims notifications as a result of deepfakes used in social engineering attacks where AI has been used to replicate the voices of C-suite members on audio calls and in images and video,” shares Bantick.
As the use of AI continues, policyholders and insurance companies may clash on claim denials and coverage exclusions.
Pharmaceutical company Merck recently won a case in an ongoing legal battle to receive insurance coverage for a 2017 NotPetya attack attributed to Russia. Insurers attempted to deny coverage by invoking the war exclusion clause. A New Jersey appellate court ruled in favor of the company, finding that the insurers must help cover Merck’s $1.4 billion in losses, The Wall Street Journal reports.
“We conclude the Insurers did not demonstrate the exclusion applied under the circumstances of this case, namely, that this cyberattack was a ‘hostile’ or ‘warlike’ action as contemplated under the exclusion,” according to the opinion of the court.
Cyber warfare and geopolitical tensions are on the rise, and critical infrastructure organizations are in the crosshairs. What will AI-powered cyber warfare mean for insurance coverage going forward? How will the war exclusion clause be interpreted? Answering these questions won’t be easy.
“Cyberattacks inherently have significant attribution challenges to begin with, especially if you add to that the additional challenges of autonomous operation,” says Gerber. “To compound this, cyber is often chosen by adversaries specifically because it can operate below the thresholds of war as we commonly define it.”
The road ahead for cyber insurance companies and policyholders may prove to be a rocky one; answers to these questions may very well continue to be sought in court.
“While the relationship between insurance companies and policyholders is still being interpreted, the legal ramifications of the act of war exclusion and the opaque nature of AI threats are presenting complex challenges for large businesses,” warns Gerber.
The use of AI does not come without thorny ethical issues. Cyber insurance companies and policyholders will need to recognize and address these issues as part and parcel of doing business with AI. Accountability, privacy, and bias all have insurance implications. Right now, there are more ethical questions than answers — a recognizable theme with many aspects of AI — as the development of this nascent technology booms.
Seven major companies developing AI have agreed to voluntary guidelines, but this is a far cry from defined regulation. How will AI regulation address ethical use? Will policyholders be covered by their cyber insurance if they fail to meet these ethical obligations?
“AI-related risks and liabilities, such as AI algorithm faults or unlawful data usage by AI systems, may need to be covered by cyber insurance plans,” says Rehman. What could that coverage look like? How much will it cost? What will be the limitations?
Insurers will also have to operate within yet-to-be-defined ethical boundaries. “To prevent any biases or unforeseen repercussions in premium calculations, insurers will need to strike a balance between utilizing AI’s capabilities and guaranteeing fairness, transparency, and ethical usage of data,” Rehman explains.
AI is a reality of doing business today, but the future it will bring is not entirely known. It is up to cyber insurance companies and their policyholders to explore the power of AI and prepare for both its benefits and challenges.